loaditout.ai

llm_benchmark

MCP Tool

cezbloch/llm_benchmark

Benchmarking tool for evaluating LLM inference server performance with OpenAI-compatible APIs.

Install

$ npx loaditout add cezbloch/llm_benchmark

Platform-specific configuration:

.claude/settings.json
{
  "mcpServers": {
    "llm_benchmark": {
      "command": "npx",
      "args": [
        "-y",
        "llm_benchmark"
      ]
    }
  }
}

Add the config above to .claude/settings.json under the mcpServers key.
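The listing does not document the tool's exact output, but inference benchmarks of this kind typically report per-request latency and aggregate token throughput against an OpenAI-compatible endpoint. A minimal sketch of that metric computation (function and field names are illustrative, not llm_benchmark's actual API):

```python
import statistics

def summarize(latencies_s, tokens_out):
    """Aggregate raw benchmark samples into summary metrics.

    latencies_s: per-request wall-clock times in seconds.
    tokens_out:  tokens generated by each request.
    """
    total_tokens = sum(tokens_out)
    total_time = sum(latencies_s)
    return {
        "p50_latency_s": statistics.median(latencies_s),
        "mean_latency_s": statistics.mean(latencies_s),
        "tokens_per_s": total_tokens / total_time,
    }

# Example: three hypothetical requests against an inference server
stats = summarize([0.8, 1.0, 1.2], [100, 120, 140])
```

The actual tool drives a live server over its OpenAI-compatible API and collects these samples itself; see the GitHub repository for supported options.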


Quality Signals

Installs: 0
Last updated: 187 days ago
Security: B

Safety

Risk level: medium
Data access: read
Network access: none

Details

Source: github-crawl
Last commit: 10/13/2025
View on GitHub

Embed Badge

[![Loaditout](https://loaditout.ai/api/badge/cezbloch/llm_benchmark)](https://loaditout.ai/skills/cezbloch/llm_benchmark)