
Show HN: EasyMemory – 100% local memory layer and MCP for LLMs

2 points | justvugg | 11 days ago

Hi everyone, I have created EasyMemory: a lightweight, fully local memory backend for chatbots, agents, and any MCP-compatible LLM (Claude, GPT, Gemini, Ollama…). Key points:

• Auto-saves every conversation
• Ingests PDFs, DOCX, Markdown vaults, folders
• Hybrid retrieval: vector + keyword + graph (no extra libs needed)
• Built-in MCP server → plug into Claude Desktop, custom agents, etc.
• 100% offline, data in ~/.easymemory
• Enterprise extras: OAuth2, API keys, rate limiting, audit logs
• Bonus: Slack JSON import, Notion/GDrive folder indexing

Quick start (MCP server):

easymemory-server --port 8100

Then point Claude Desktop or your agent to http://localhost:8100/mcp. Or chat with Ollama:
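For Claude Desktop specifically, MCP servers are registered in claude_desktop_config.json; since the desktop client historically speaks stdio, one common pattern is bridging an HTTP endpoint through the mcp-remote package. This is a sketch of that pattern, not EasyMemory's documented setup (the "easymemory" entry name is my own choice):

```json
{
  "mcpServers": {
    "easymemory": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:8100/mcp"]
    }
  }
}
```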

easymemory-agent --provider ollama --model llama3.1:8b

Python usage:

import asyncio
from easymemory.agent import EasyMemoryAgent

async def main():
    async with EasyMemoryAgent(llm_provider="ollama", model="llama3.1:8b") as agent:
        print(await agent.chat("Remember: I prefer dark mode."))
        # Later...
        print(await agent.chat("What UI do I prefer?"))  # → "You prefer dark mode"

asyncio.run(main())
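On the "retrieval mix" question: one standard way to merge vector, keyword, and graph results is reciprocal rank fusion. This is a generic sketch of that technique, not EasyMemory's actual code; the function and the note IDs below are hypothetical:

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc IDs.

    Each list contributes 1 / (k + rank + 1) to a doc's score, so items
    ranked highly by multiple retrievers float to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from two retrievers for "What UI do I prefer?"
vector_hits = ["note_dark_mode", "note_meeting", "note_todo"]
keyword_hits = ["note_todo", "note_dark_mode"]

fused = rrf([vector_hits, keyword_hits])
print(fused[0])  # → "note_dark_mode" (top-ranked in both lists)
```

The constant k dampens the influence of any single retriever's top hit; 60 is the value from the original RRF paper, but it is tunable.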

MIT licensed, minimal deps, early stage. Repo: https://github.com/JustVugg/easymemory

Looking for feedback on:

• What retrieval mix works best for your long-term memory needs?
• Pain points with current local memory solutions?
• Nice-to-have integrations?

Thanks!
