cerebrixos-org/tuning-engines-cli
CLI & MCP server for Tuning Engines — fine-tune LLMs on code repositories
[Glama MCP listing](https://glama.ai/mcp/servers/cerebrixos-org/tuning-engines-cli)
[npm](https://www.npmjs.com/package/tuningengines-cli) · [MCP Registry](https://registry.modelcontextprotocol.io) · [MIT License](https://opensource.org/licenses/MIT)
Own your sovereign AI model. Domain-specific fine-tuning of open-source LLMs and SLMs with total control and zero infrastructure hassle.
[Tuning Engines](https://tuningengines.com) provides specialized tuning agents to tailor top open models to your needs — fast, predictable, fully delivered. Fine-tune Qwen, Llama, DeepSeek, Mistral, Gemma, Phi, StarCoder, and CodeLlama models from 1B to 72B parameters on your data via CLI or any MCP-compatible AI assistant. LoRA, QLoRA, and full fine-tuning supported. GPU provisioning, training orchestration, and model delivery fully managed.
Tuning Engines uses specialized agents that control how your data is analyzed and converted into training data. Each agent produces a different kind of domain-specific fine-tuned model, optimized for its use case. Current agents focus on code; more are coming for customer support, data extraction, security review, ops, and other domains.
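If you use an MCP-capable assistant, the server can typically be registered in the client's MCP configuration. A sketch assuming the package is launched via `npx`; the `mcp` subcommand and the `TE_API_KEY` variable name are assumptions here, so check the package documentation for the exact invocation:

```json
{
  "mcpServers": {
    "tuning-engines": {
      "command": "npx",
      "args": ["-y", "tuningengines-cli", "mcp"],
      "env": { "TE_API_KEY": "<your-api-key>" }
    }
  }
}
```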
Cody (`code_repo`) — Code Autocomplete Agent
Cody fine-tunes on your GitHub repo using QLoRA (4-bit quantized LoRA) via the Axolotl framework (HuggingFace Transformers + PEFT). It learns your codebase's patterns, naming conventions, and project structure to produce a fast, lightweight adapter optimized for real-time completions.
Best for: code autocomplete, inline suggestions, tab-complete, code style matching, pattern completion.
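For context on what QLoRA training with Axolotl looks like, here is a minimal illustrative config. This is not Tuning Engines' actual configuration; the base model, dataset path, and hyperparameters are placeholders:

```yaml
# Illustrative Axolotl QLoRA config (values are placeholders)
base_model: Qwen/Qwen2.5-Coder-1.5B
load_in_4bit: true          # QLoRA: base weights quantized to 4-bit
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true    # apply LoRA to all linear layers
datasets:
  - path: data/repo_completions.jsonl   # hypothetical dataset path
    type: completion
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 2
learning_rate: 0.0002
output_dir: ./outputs/cody-adapter
```

The 4-bit quantization keeps GPU memory low while the LoRA adapter carries the trainable parameters, which is what makes the resulting artifact small enough for fast completion serving.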
Create a fine-tuning job with the code autocomplete agent:

```
te jobs create --agent code_repo
```