MrBoor/memfabric
Self-organizing agent memory. No embeddings, no vector DB.
Platform-specific configuration:
```json
{
  "mcpServers": {
    "memfabric": {
      "command": "npx",
      "args": ["-y", "memfabric"]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
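If you'd rather script the setup, a minimal sketch of merging that entry into `.claude/settings.json` without clobbering existing servers (the path and key names follow the config above; everything else is ordinary stdlib code):

```python
import json
from pathlib import Path

settings_path = Path(".claude/settings.json")

# Load existing settings if the file exists, otherwise start from scratch.
settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}

# Add the memfabric entry under mcpServers, keeping any servers
# that are already configured.
settings.setdefault("mcpServers", {})["memfabric"] = {
    "command": "npx",
    "args": ["-y", "memfabric"],
}

settings_path.parent.mkdir(parents=True, exist_ok=True)
settings_path.write_text(json.dumps(settings, indent=2))
```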
MemFabric is a memory system where the LLM organizes its own knowledge as plain markdown files with descriptive filenames. There is no vector database, no embedding pipeline, no retrieval model. The LLM reads a list of filenames, decides what's relevant, and reads those files. It stores new information by creating, appending to, or reorganizing files: merging, splitting, renaming, and synthesizing as its knowledge grows.
The server just stores and retrieves files. All intelligence (what to remember, how to organize, what's relevant) comes from the connected LLM.
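To make "the server just stores and retrieves files" concrete, here's a hypothetical in-memory version of that surface. The method names (`list_files`, `read_file`, `write_file`, `append_file`, `delete_file`) are illustrative assumptions, not MemFabric's actual tool API:

```python
class FileStore:
    """A deliberately dumb store: no search, no ranking, no embeddings.
    The connected LLM makes every decision; the store only moves bytes."""

    def __init__(self):
        self.files = {}  # filename -> markdown content

    def list_files(self):
        # The LLM scans this listing of descriptive filenames
        # to decide which files are worth reading.
        return sorted(self.files)

    def read_file(self, name):
        return self.files[name]

    def write_file(self, name, content):
        # Combined with delete_file, this also covers merging,
        # splitting, and renaming as the LLM reorganizes.
        self.files[name] = content

    def append_file(self, name, text):
        self.files[name] = self.files.get(name, "") + text

    def delete_file(self, name):
        del self.files[name]
```

Everything listed as "intelligence" above happens in the model's context window; the store never inspects the markdown it holds.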
After a few weeks of conversations, your AI builds up memory like this:
```
memory/
  about-me.md             0.8 KB
  family-and-friends.md   1.4 KB
  food-and-cooking.md     1.1 KB
  health.md               0.6 KB
  travel.md               2.3 KB
```

Then in a new conversation:
```
You: My sister is visiting this weekend. What should I cook?

AI:  [reads family-and-friends.md — sister is Sarah, loves Italian]
     [reads health.md — you're lactose intolerant]

     Since Sarah loves Italian and you're dairy-free, how about
     a lemon pasta with roasted garlic and cherry tomatoes? You
     made something similar last month and liked it.

You: Perfect, let's do that.

AI:  [writes to food-and-cooking.md]

     Noted! I saved it so I'll remember for next time.
```

Two reads, one write. The AI connected your sister's preferences with your dietary needs without being reminded of either. No setup, no schema, no retrieval config. This is a small example; in practice you'll have 20-30 files or more, and the AI reorganizes them over time, merging, splitting, and rewriting to keep things clean.
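The shape of that turn can be sketched as a loop: list filenames, let the model pick, read the picks, answer, and optionally write one note back. Here `memory` is a plain dict of filename to markdown, and `pick_relevant` / `compose_reply` are hypothetical stand-ins for the LLM calls that make both decisions in the real system:

```python
def answer_turn(memory, question, pick_relevant, compose_reply):
    """One conversational turn: a couple of reads, at most one write."""
    names = sorted(memory)                           # cheap: filenames only
    chosen = pick_relevant(question, names)          # LLM picks, e.g., 2 files
    context = {n: memory[n] for n in chosen}         # read just those files
    reply, note = compose_reply(question, context)   # LLM answers
    if note:                                         # write back if warranted
        name, text = note
        memory[name] = memory.get(name, "") + text
    return reply
```

The point of the sketch is what's absent: no index to build, no embedding step, no similarity threshold to tune. Relevance is a judgment the model makes by reading filenames.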