AiAgentKarl/llm-benchmark-mcp-server
MCP Server for LLM comparison, benchmarks, and pricing — find the best model for any task
Platform-specific configuration:

```json
{
  "mcpServers": {
    "llm-benchmark-mcp-server": {
      "command": "npx",
      "args": [
        "-y",
        "llm-benchmark-mcp-server"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
MCP server that gives AI agents access to LLM benchmark data, pricing comparisons, and model recommendations.
GPT-4o, GPT-4o-mini, GPT-4 Turbo, o1, o3-mini, Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus, Gemini 2.0 Flash, Gemini 2.0 Pro, Gemini 1.5 Pro, Llama 3.1 (8B/70B/405B), Llama 3.3 70B, Mistral Large, Mistral Small, Mixtral 8x22B, DeepSeek V3, DeepSeek R1, Qwen 2.5 72B
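Once configured, an agent queries the server over the standard MCP JSON-RPC `tools/call` method. A minimal sketch of such a request, assuming a hypothetical `compare_models` tool and argument names (the actual tool names and schemas are defined by the server):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "compare_models",
    "arguments": {
      "models": ["gpt-4o", "claude-3-5-sonnet"],
      "criteria": ["price", "benchmarks"]
    }
  }
}
```

The server replies with a `result` payload containing the benchmark and pricing data, which the agent uses to pick a model.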
```shell
pip install llm-benchmark-mcp-server
```

Add to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "llm-benchmark": {
      "command": "benchmark-server"
    }
  }
}
```

Or via `uvx` (no install needed):
```json
{
  "mcpServers": {
    "llm-benchmark": {
      "command": "uvx",
      "args": ["llm-benchmark-mcp-server"]
    }
  }
}
```

---