roberteisenberg/mcp-knowledge-graph
Clinical intelligence tool built across 6 phases — demonstrates reducing LLM hallucinations and cost by moving work into infrastructure
Platform-specific configuration:

```json
{
  "mcpServers": {
    "mcp-knowledge-graph": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-knowledge-graph"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
A progressive tutorial that builds a clinical intelligence tool using MCP (Model Context Protocol), demonstrating how to reduce LLM hallucinations and cost by moving work from the LLM into infrastructure.
Two problems define LLM application development: hallucination and cost.
Every phase of this tutorial adds infrastructure that takes work away from the LLM. The LLM doesn't get dumber — it gets a narrower, more appropriate job.
| Phase | What it adds | Hallucination reduction | Cost reduction |
|---|---|---|---|
| Phase 0 | Baseline — all tools hardcoded | — | — |
| Phase 1 | MCP server, resources, tool discovery | Same tools, same behavior — this is an architecture change, not a capability change. Resources give minor upfront context. | Same |
| Phase 2 | Knowledge graph, graph traversal, MCP prompts | `find_path` and `suggest_join` give deterministic answers — no speculative SQL | Graph tools replace multi-step LLM reasoning |
| Phase 3 | Deterministic workflows | Python drives the tool calls — zero hallucination in orchestration | 30 MCP calls + 9 Claude calls vs. fully interactive |
| Phase 4 | Semantic search | Discovery grounded in actual data — no hallucinated entities | First call returns ranked results vs. trial-and-error |
| Phase 5 | Tracing, cost tracking, eval | Measure hallucinations instead of eyeballing. Prove Phase 3 is cheaper. | Know what every query costs. Set budget limits. |
A clinical intelligence tool that connects a private clinic database (patients, prescriptions, drug interactions) to public FDA data (drug labels, adverse events) through a knowledge graph. The same six test queries run against each phase.
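The Phase 5 budget-limit idea can be sketched with a small per-call cost tracker. The per-million-token prices and token counts below are illustrative assumptions, not the tutorial's actual numbers.

```python
# Illustrative $/million-token prices (hypothetical, not real pricing).
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}

class CostTracker:
    """Accumulates per-call cost and raises once a budget is exceeded."""

    def __init__(self, budget_usd: float):
        self.budget_usd = budget_usd
        self.total_usd = 0.0

    def record(self, input_tokens: int, output_tokens: int) -> float:
        cost = (input_tokens * PRICE_PER_MTOK["input"]
                + output_tokens * PRICE_PER_MTOK["output"]) / 1_000_000
        self.total_usd += cost
        if self.total_usd > self.budget_usd:
            raise RuntimeError(f"budget exceeded: ${self.total_usd:.4f}")
        return cost

tracker = CostTracker(budget_usd=0.05)
tracker.record(input_tokens=1200, output_tokens=300)
print(f"{tracker.total_usd:.6f}")  # 0.008100
```

Wrapping every Claude call through something like `record()` is what turns "eyeballing" into knowing what each query costs.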