zombat/scrub-mcp
A 16-tool MCP server that cuts cloud LLM token usage on code quality tasks. Deterministic tools handle what they can. A local LLM (via DSPy) handles the rest. Cloud models plan and review. Nothing else.
Platform-specific configuration:

```json
{
  "mcpServers": {
    "scrub-mcp": {
      "command": "npx",
      "args": ["-y", "scrub-mcp"]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
Source Code Review, Uplift, and Baselining
```
Cloud LLM (plan) ──> S.C.R.U.B. MCP Server ──> Cloud LLM (review)
                              │
                  ┌───────────┼───────────┐
                  ▼           ▼           ▼
            Deterministic   DSPy +      Security +
            (Ruff, AST,    Local LLM   Supply Chain
             pyright)     (Qwen Coder) (Bandit, OSV)
```

Cloud LLMs waste tokens on boilerplate. Docstrings, type annotations, linting fixes, test stubs: these are high-volume, low-reasoning tasks that eat your context window and your budget. When the context gets long, the model gets lazy. It half-writes docstrings. It skips the 47th function. It "summarizes" instead of generating.
S.C.R.U.B. moves that work to a local pipeline where compute is virtually free, quality is consistent, and every function gets the same pass whether it's the first or the last.
Deterministic-first. Every task hits deterministic tools before the LLM sees it. Ruff handles linting. pyright validates types. pydocstyle checks docstring style. AST analysis computes complexity. Bandit scans for vulnerabilities. If the deterministic tool says the code already passes, the LLM never fires. Zero tokens spent.
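The deterministic-first gate can be sketched as a simple pattern (the function names here are illustrative, not the actual scrub-mcp API): a deterministic checker runs first, and the LLM fixer is only invoked when the check fails.

```python
from typing import Callable

def gate(check: Callable[[str], bool], fix: Callable[[str], str], code: str) -> str:
    """Run the deterministic check first; escalate to the LLM only on failure."""
    if check(code):
        return code  # already passes: zero tokens spent
    return fix(code)  # deterministic tool flagged it: local LLM fires

# Stand-in check/fix callables for illustration:
clean = gate(lambda c: True, lambda c: c + "  # fixed", "x = 1")   # "x = 1"
dirty = gate(lambda c: False, lambda c: c + "  # fixed", "x=1")    # "x=1  # fixed"
```

In the real server the `check` role is played by Ruff, pyright, pydocstyle, or Bandit, and the `fix` role by the DSPy pipeline.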
Three-tier pre-filter. Tier 1 (AST): is the docstring/annotation physically present? Tier 2 (pydocstyle/pyright): does the existing artifact pass quality checks? Tier 3 (only failures): send to the local LLM. Each tier is gated to its step. Ask for --steps lint and no pre-filter runs for docstrings.
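Tier 1 is purely syntactic, so it costs nothing but a parse. A minimal sketch of that check using Python's standard `ast` module (the helper name is hypothetical):

```python
import ast

def missing_docstrings(source: str) -> list[str]:
    """Tier 1: which functions physically lack a docstring?"""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

src = '''
def documented():
    """Has a docstring."""

def bare():
    pass
'''
missing_docstrings(src)  # ["bare"]
```

Only the names this tier returns move on to Tier 2's quality checks; everything else is filtered out before any LLM is involved.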
Batched DSPy calls. Instead of one LLM call per function, S.C.R.U.B. packs 5 functions into a single prompt (configurable batch_size). A file with 30 functions goes from 60 round trips to 12.
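The batching itself is straightforward; a minimal sketch (chunking logic only, not the actual DSPy prompt assembly):

```python
def batches(items: list, batch_size: int = 5) -> list[list]:
    """Pack functions into prompts of batch_size to cut LLM round trips."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

funcs = [f"fn_{i}" for i in range(30)]
len(batches(funcs))  # 6 batched prompts per pass, instead of 30 single calls
```

With `batch_size=5`, 30 functions collapse into 6 prompts per pass, which is where the 60-to-12 round-trip reduction comes from when two passes (e.g. docstrings and annotations) run over the same file.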
Teacher-student optimization. U