# compare-mcp

Multi-model code review with ranked todos and subagent dispatch, inside Claude Code CLI.
Platform-specific configuration — add the following to `.claude/settings.json` under the `mcpServers` key:

```json
{
  "mcpServers": {
    "compare-mcp": {
      "command": "npx",
      "args": ["-y", "compare-mcp"]
    }
  }
}
```
Claude Code is great at code review, but it only talks to one model. Copilot CLI recently shipped multi-model debugging, letting you bounce a problem off GPT, Claude, and Gemini in one shot. Claude Code can't do that natively. This MCP server adds it: bring your own API keys, fan out to any combination of models, and get back a diffed, ranked list of what each model found.
Fan out any bug or task to multiple LLMs simultaneously, diff their unique insights, optionally run a debate round where models critique each other, then dispatch parallel subagents to implement the combined best fixes — each with its own git commit.
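The fan-out and diff steps can be sketched roughly as below. The model calls are stubbed with fixed findings, and none of the function names (`ask_model`, `fan_out`) come from compare-mcp's actual API — this is only an illustration of the pattern:

```python
import asyncio

# Stub reviewers standing in for real model API calls.
# Each returns a set of findings for the same code under review.
async def ask_model(name: str, task: str) -> tuple[str, set[str]]:
    findings = {
        "gpt": {"sql injection in query()", "hardcoded secret"},
        "kimi": {"hardcoded secret", "missing input validation"},
        "minimax": {"hardcoded secret"},
    }[name]
    return name, findings

async def fan_out(task: str, models: list[str]) -> dict[str, set[str]]:
    # Query every model concurrently, then diff their answers:
    # findings shared by all models form the consensus; the rest
    # are each model's unique insights.
    results = dict(await asyncio.gather(*(ask_model(m, task) for m in models)))
    consensus = set.intersection(*results.values())
    unique = {m: f - consensus for m, f in results.items()}
    return {"consensus": consensus, **unique}

report = asyncio.run(fan_out("review config.py", ["gpt", "kimi", "minimax"]))
```

A debate round would feed each model the others' unique findings and ask for critiques before the final ranking; the dispatch step then hands the surviving fixes to parallel subagents.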
https://github.com/user-attachments/assets/8990dabb-bc61-4625-8930-c914cffe75da
> /compare models → /compare review config.py for security issues → /compare --debate → /compare status
```shell
pip install compare-mcp
claude mcp add -s user compare-mcp -- python -m compare_mcp
```

Then grab the /compare skill and example config:

```shell
git clone https://github.com/carolinacherry/compare-mcp.git --depth 1
mkdir -p ~/.claude/skills ~/
```