loaditout.ai

llm-wiki-kit

MCP Tool

iamsashank09/llm-wiki-kit

An MCP server for persistent, agent-maintained knowledge bases. Implements Karpathy's LLM Wiki pattern for long-term context and state.

Install

$ npx loaditout add iamsashank09/llm-wiki-kit

Platform-specific configuration:

.claude/settings.json
{
  "mcpServers": {
    "llm-wiki-kit": {
      "command": "npx",
      "args": [
        "-y",
        "llm-wiki-kit"
      ]
    }
  }
}

Add the config above to .claude/settings.json under the mcpServers key.

About

📚 llm-wiki-kit

An MCP server that implements Karpathy's LLM Wiki pattern - persistent, LLM-maintained knowledge bases that compound over time.

Instead of RAG (rediscovering knowledge from scratch on every query), the LLM incrementally builds and maintains a structured wiki of interlinked markdown files, with cross-references, summaries, and synthesis pages that get richer with every source you add.
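Concretely, the wiki is just a tree of markdown files that link to one another. A minimal sketch of what such a layout might look like (the directory names and page contents here are illustrative assumptions, not a structure prescribed by llm-wiki-kit):

```python
import tempfile
from pathlib import Path

# Hypothetical wiki layout: an index, a concept page, and a synthesis page,
# linked together with ordinary relative markdown links.
PAGES = {
    "index.md": (
        "# Wiki Index\n"
        "- [Speculative Decoding](concepts/speculative_decoding.md)\n"
    ),
    "concepts/speculative_decoding.md": (
        "# Speculative Decoding\n"
        "See [cache strategies](../synthesis/cache_strategies.md).\n"
    ),
    "synthesis/cache_strategies.md": (
        "# Cache Strategies\n"
        "Draws on [speculative decoding](../concepts/speculative_decoding.md).\n"
    ),
}

root = Path(tempfile.mkdtemp())
for rel, body in PAGES.items():
    page = root / rel
    page.parent.mkdir(parents=True, exist_ok=True)  # create concepts/, synthesis/
    page.write_text(body)

# Every page in the layout now exists on disk as a plain markdown file.
written = sorted(p.relative_to(root).as_posix() for p in root.rglob("*.md"))
print(written)
```

Because the state is plain files, any agent (or human) can read, diff, and version-control the knowledge base with ordinary tools.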

Why?

The tedious part of maintaining a knowledge base isn't the reading or the thinking; it's the bookkeeping: updating cross-references, keeping summaries current, noting contradictions, maintaining consistency. LLMs are perfect for this. You curate and direct. The LLM does everything else.

Example use case: The Research Loop

Imagine you are researching a new and complex technology like LLM speculative decoding. Instead of reading 10 papers and taking manual notes, you use llm-wiki-kit to let your agent build a state map over time.

The Workflow
  1. Human: drops 3 PDFs into raw/
  2. Human: "Analyze these papers and update the KB. Pay special attention to KV cache optimizations."
  3. Agent (via MCP):
  • Calls wiki_ingest for each paper
  • Calls wiki_write_page to create concepts/speculative_decoding.md
  • Calls wiki_write_page to update synthesis/cache_strategies.md and link it to the papers
  • Calls wiki_lint to ensure the new "Draft Model" concept is cross-referenced with existing "Inference" pages
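The tool-call sequence above can be sketched as a simple driver loop. `call_tool` here is a hypothetical stand-in for an MCP client dispatching to the server; only the tool names (`wiki_ingest`, `wiki_write_page`, `wiki_lint`) come from the workflow above, and the argument shapes are assumptions:

```python
from typing import Any

def call_tool(name: str, **args: Any) -> dict:
    # Hypothetical MCP client shim: a real agent would send this call
    # over the MCP protocol to the llm-wiki-kit server.
    print(f"-> {name}({args})")
    return {"ok": True, "tool": name}

papers = ["raw/paper_01.pdf", "raw/paper_02.pdf", "raw/paper_03.pdf"]

# 1. Ingest each source the human dropped into raw/
results = [call_tool("wiki_ingest", path=p) for p in papers]

# 2. Create or update pages with the synthesized findings
call_tool("wiki_write_page",
          path="concepts/speculative_decoding.md",
          content="# Speculative Decoding\n...")
call_tool("wiki_write_page",
          path="synthesis/cache_strategies.md",
          content="# Cache Strategies\n...",
          links=papers)

# 3. Lint so new concepts stay cross-referenced with existing pages
report = call_tool("wiki_lint")
```

The point of the loop is that each step leaves durable state in the wiki, so a later session can pick up from the files rather than from the chat transcript.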
The Result

Two weeks later, you start a fresh chat session in Cursor or Claude Code. You do not need to re-upload the papers or re-explain what you learned. You ask:

> "Based on our research so far, which draft model architecture is most efficient for Llama 3?"

Your agent calls wiki_search, reads the synthesis pages it wrote earlier, and answers from accumulated evidence:

> "Based on the compiled evidence in your KB, the Eagle architecture is currently leading …"

Tags

ai-agent, knowledge-base, llm, llm-tools, mcp, mcp-server


Quality Signals

Stars: 1
Installs: 0
Last updated: 6 days ago
Security: B
README: New

Safety

Risk Level: medium
Data Access: read
Network Access: none

Details

Source: github-crawl
Last commit: 4/7/2026
View on GitHub →

Embed Badge

[![Loaditout](https://loaditout.ai/api/badge/iamsashank09/llm-wiki-kit)](https://loaditout.ai/skills/iamsashank09/llm-wiki-kit)