loaditout.ai

scrub-mcp

MCP Tool

zombat/scrub-mcp

A 16-tool MCP server that cuts cloud LLM token usage on code quality tasks. Deterministic tools handle what they can. A local LLM (via DSPy) handles the rest. Cloud models plan and review. Nothing else.

Install

$ npx loaditout add zombat/scrub-mcp

Platform-specific configuration:

.claude/settings.json
{
  "mcpServers": {
    "scrub-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "scrub-mcp"
      ]
    }
  }
}

Add the config above to .claude/settings.json under the mcpServers key.

About

S.C.R.U.B.

Source Code Review, Uplift, and Baselining

Cloud LLM (plan) ──> S.C.R.U.B. MCP Server ──> Cloud LLM (review)
                          │
              ┌───────────┼───────────┐
              ▼           ▼           ▼
        Deterministic   DSPy +     Security +
        (Ruff, AST,    Local LLM   Supply Chain
         pyright)     (Qwen Coder)  (Bandit, OSV)

Why

Cloud LLMs waste tokens on boilerplate. Docstrings, type annotations, linting fixes, test stubs: these are high-volume, low-reasoning tasks that eat your context window and your budget. When the context gets long, the model gets lazy. It half-writes docstrings. It skips the 47th function. It "summarizes" instead of generating.

S.C.R.U.B. moves that work to a local pipeline where compute is virtually free, quality is consistent, and every function gets the same pass whether it's the first or the last.

Architecture

Deterministic-first. Every task hits deterministic tools before the LLM sees it. Ruff handles linting. pyright validates types. pydocstyle checks docstring style. AST analysis computes complexity. Bandit scans for vulnerabilities. If the deterministic tool says the code already passes, the LLM never fires. Zero tokens spent.
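The gate described above can be sketched as a small pattern. The names here are illustrative, not the server's actual API, and the toy checker stands in for a real tool like Ruff:

```python
from typing import Callable

def deterministic_first(
    code: str,
    checker: Callable[[str], list[str]],
    llm_fix: Callable[[str, list[str]], str],
) -> str:
    """Run the deterministic checker first; the LLM only fires
    when the checker reports findings (illustrative sketch)."""
    findings = checker(code)
    if not findings:
        return code  # already passes: the LLM never runs, zero tokens spent
    return llm_fix(code, findings)

# Toy deterministic check standing in for a linter:
# flag lines that carry trailing whitespace.
def trailing_ws_checker(code: str) -> list[str]:
    return [
        f"line {i}: trailing whitespace"
        for i, line in enumerate(code.splitlines(), 1)
        if line != line.rstrip()
    ]
```

The point of the pattern is the early return: a clean file short-circuits before any model, local or cloud, is invoked.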

Three-tier pre-filter. Tier 1 (AST): is the docstring/annotation physically present? Tier 2 (pydocstyle/pyright): does the existing artifact pass quality checks? Tier 3 (only failures): send to the local LLM. Each tier is gated to its step. Ask for --steps lint and no pre-filter runs for docstrings.

Batched DSPy calls. Instead of one LLM call per function, S.C.R.U.B. packs 5 functions into a single prompt (configurable batch_size). A file with 30 functions goes from 60 round trips to 12.
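The batching itself is plain chunking. A sketch, using the `batch_size` name from the config option mentioned above:

```python
def batch_functions(functions: list[str], batch_size: int = 5) -> list[list[str]]:
    """Pack functions into groups so one LLM prompt covers
    batch_size of them instead of one round trip each."""
    return [
        functions[i:i + batch_size]
        for i in range(0, len(functions), batch_size)
    ]
```

For a 30-function file, each pass drops from 30 calls to 6; run two passes over the file and you get the 60-to-12 reduction cited above.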

Teacher-student optimization.

Tags

agentic-workflows, ai-agents, claude-code, code-quality, code-review, devsecops, devtools, dspy, knowledge-distillation, linting, local-llm, mcp, mcp-server, model-context-protocol, python, refactoring, ruff, sbom, static-analysis


Quality Signals

Installs: 0
Last updated: 20 days ago
Security: A

Safety

Risk Level: medium
Data Access: read
Network Access: none

Details

Source: github-crawl
Last commit: 3/28/2026
View on GitHub→

Embed Badge

[![Loaditout](https://loaditout.ai/api/badge/zombat/scrub-mcp)](https://loaditout.ai/skills/zombat/scrub-mcp)