thomassuedbroecker/chaindocs_MCP_example
LangChain-based MCP server example that indexes local Markdown and text files, splits them into searchable chunks, and exposes document search and retrieval tools over Streamable HTTP. The repo uses only open-source dependencies and includes architecture documentation and automated tests with coverage checks.
Platform-specific configuration:

```json
{
  "mcpServers": {
    "chaindocs_MCP_example": {
      "command": "npx",
      "args": [
        "-y",
        "chaindocs_MCP_example"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
Example MCP server that loads local .md and .txt files with LangChain, splits them into chunks, and exposes search and retrieval tools over MCP. The default local setup uses the streamable-http transport at http://127.0.0.1:9015/mcp.
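To give a feel for the chunking step, here is a stdlib-only sketch of fixed-size splitting with overlap. It is a simplified stand-in, not the repo's code: the real `RecursiveCharacterTextSplitter` recursively tries separators such as paragraphs and sentences before falling back to character windows.

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Sliding-window splitter: each chunk repeats the last `overlap`
    characters of the previous one, so a match near a boundary still
    lands fully inside at least one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

The overlap is the important design choice: without it, a phrase split across a chunk boundary would be invisible to chunk-level keyword search.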
```mermaid
flowchart LR
    A[MCP Client] -->|Configured MCP transport| B[FastMCP Server]
    C[.env and environment variables] --> D[Settings]
    D --> B
    B --> E[Registered MCP tools]
    E --> F[DocumentStore]
    F --> G[LangChain TextLoader]
    G --> H[Local .md and .txt files]
    F --> I[RecursiveCharacterTextSplitter]
    I --> J[In-memory chunk index]
    J --> E
    E --> K[Search, read, and chunk responses]
```

This example follows a simple request and indexing pipeline:
- `scripts/run_local.sh` validates configuration and launches `python -m langchain_documents_mcp_server.main`.
- `main.py` delegates to `server.py`, where `create_server()` loads typed settings from `.env`, configures logging, creates the `DocumentStore`, and registers the MCP tools.
- `DocumentStore.reload()` scans `DOCUMENTS_PATH`, loads `.md` and `.txt` files with LangChain `TextLoader`, splits them with `RecursiveCharacterTextSplitter`, and builds an in-memory index of document metadata and chunks.
- `search_documents()` performs lightweight keyword scoring against indexed chunks and returns excerpts, while `read_document()` and `get_document_chunk()` return the original file or a single indexed chunk.
- Each tool returns a `{ "ok": true, "result": ... }` or `{ "ok": false, "error": ... }` payload, so clients receive structured success and failure responses instead of raw tracebacks.

Main implementation modules:
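The `ok`/`error` envelope described above can be sketched as a small decorator. This is an illustrative sketch only; the decorator name and example tool are assumptions, not the repo's actual helpers.

```python
import functools

def mcp_result(fn):
    """Hypothetical wrapper: return {"ok": true, "result": ...} on success
    and {"ok": false, "error": ...} on failure, never a raw traceback."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return {"ok": True, "result": fn(*args, **kwargs)}
        except Exception as exc:
            return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}
    return wrapper

@mcp_result
def read_document(name: str) -> str:
    # Stand-in for the real read_document() tool.
    if name != "README.md":
        raise FileNotFoundError(name)
    return "# Example"
```

Clients can then branch on the `ok` field instead of parsing error strings out of transport-level failures.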
- `config.py`: typed settings, transport normalization, and validation.
- `document_store.py`: file discovery, LangChain loading, chunking, in-memory indexing, and search ranking.
- `server.py`: FastMCP server creation and tool registration.
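The "lightweight keyword scoring" that `document_store.py` performs could look roughly like the following stdlib-only sketch, which counts query-term occurrences per chunk and returns the best excerpts. The `Chunk` shape and `search` signature are assumptions for illustration, not the module's real API.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    doc: str    # source file name
    index: int  # position of this chunk within the document
    text: str

def search(chunks: list[Chunk], query: str, top_k: int = 3) -> list[dict]:
    """Score each chunk by counting occurrences of the query terms,
    then return the top_k matches with a short excerpt."""
    terms = query.lower().split()
    scored = []
    for c in chunks:
        body = c.text.lower()
        score = sum(body.count(t) for t in terms)
        if score:
            scored.append((score, c))
    scored.sort(key=lambda sc: sc[0], reverse=True)
    return [{"doc": c.doc, "chunk": c.index, "score": s, "excerpt": c.text[:80]}
            for s, c in scored[:top_k]]
```

Keeping the index as plain in-memory chunks makes this trivially fast for a handful of local files, at the cost of no persistence and no semantic (embedding-based) matching.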