loaditout.ai

chaindocs_MCP_example

MCP Tool

thomassuedbroecker/chaindocs_MCP_example

LangChain-based MCP server example that indexes local Markdown and text files, splits them into searchable chunks, and exposes document search and retrieval tools over Streamable HTTP. The repo includes open-source-only dependencies, architecture documentation, and automated tests with verified coverage.

Install

$ npx loaditout add thomassuedbroecker/chaindocs_MCP_example

Platform-specific configuration:

.claude/settings.json
{
  "mcpServers": {
    "chaindocs_MCP_example": {
      "command": "npx",
      "args": [
        "-y",
        "chaindocs_MCP_example"
      ]
    }
  }
}

Add the config above to .claude/settings.json under the mcpServers key.
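The npx invocation above assumes a published npm package. Since the repository itself is a Python server whose default local transport is streamable-http at http://127.0.0.1:9015/mcp, a direct HTTP configuration may be closer to how it actually runs. A hedged alternative sketch — the "type": "http" form is supported by recent Claude Code MCP configuration, but check your client's documentation before relying on it:

```json
{
  "mcpServers": {
    "chaindocs_MCP_example": {
      "type": "http",
      "url": "http://127.0.0.1:9015/mcp"
    }
  }
}
```

This requires the server to already be running locally (e.g. via scripts/run_local.sh).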

About

LangChain Documents MCP Server

Example MCP server that loads local .md and .txt files with LangChain, splits them into chunks, and exposes search and retrieval tools over MCP. The default local setup uses the streamable-http transport at http://127.0.0.1:9015/mcp.
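The streamable-http transport speaks JSON-RPC 2.0 over HTTP POST. As a minimal illustration of what a client would send to the documented endpoint, the sketch below builds an MCP `initialize` request body; the method name follows the MCP specification, the protocol version and client info values are illustrative, and no request is actually sent:

```python
import json

# Documented default endpoint for the local server (not contacted here).
MCP_ENDPOINT = "http://127.0.0.1:9015/mcp"

def build_initialize_request(request_id: int = 1) -> str:
    """Build an MCP `initialize` JSON-RPC request body as a JSON string."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            # Version and client info per the MCP spec; values are illustrative.
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }
    return json.dumps(payload)

print(build_initialize_request())
```

A real client would POST this body to MCP_ENDPOINT with Content-Type application/json and then continue the MCP handshake.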

Architecture Overview
flowchart LR
    A[MCP Client] -->|Configured MCP transport| B[FastMCP Server]
    C[.env and environment variables] --> D[Settings]
    D --> B
    B --> E[Registered MCP tools]
    E --> F[DocumentStore]
    F --> G[LangChain TextLoader]
    G --> H[Local .md and .txt files]
    F --> I[RecursiveCharacterTextSplitter]
    I --> J[In-memory chunk index]
    J --> E
    E --> K[Search, read, and chunk responses]

This example follows a simple request and indexing pipeline:

  • Startup begins in scripts/run_local.sh, which validates configuration and launches python -m langchain_documents_mcp_server.main.
  • main.py delegates to server.py, where create_server() loads typed settings from .env, configures logging, creates the DocumentStore, and registers the MCP tools.
  • DocumentStore.reload() scans DOCUMENTS_PATH, loads .md and .txt files with LangChain TextLoader, splits them with RecursiveCharacterTextSplitter, and builds an in-memory index of document metadata and chunks.
  • search_documents() performs lightweight keyword scoring against indexed chunks and returns excerpts, while read_document() and get_document_chunk() return the original file or a single indexed chunk.
  • Tool responses are wrapped in a consistent { "ok": true, "result": ... } or { "ok": false, "error": ... } payload so clients receive structured success and failure responses instead of raw tracebacks.

Main implementation modules:

  • config.py: typed settings, transport normalization, and validation.
  • document_store.py: file discovery, LangChain loading, chunking, in-memory indexing, and search ranking.
  • server.py: FastMCP server creation, tool registration, and structured response wrapping.

Tags

chain, chain-docs, docs, example-repo, langchain, markdown-language, mcp-server


Quality Signals

Stars: 1
Installs: 0
Last updated: 17 days ago
Security grade: A
README: included

Safety

Risk Level: medium
Data Access: read
Network Access: none

Details

Source: github-crawl
Last commit: 3/31/2026
View on GitHub

Embed Badge

[![Loaditout](https://loaditout.ai/api/badge/thomassuedbroecker/chaindocs_MCP_example)](https://loaditout.ai/skills/thomassuedbroecker/chaindocs_MCP_example)