loaditout.ai

goose-orchestrator

MCP Tool

santiagoaraoz2001-sketch/goose-orchestrator

Multi-model orchestrator-worker MCP extension for Goose. VRAM-aware model hot-swapping with specialized worker roles.

Install

$ npx loaditout add santiagoaraoz2001-sketch/goose-orchestrator

Platform-specific configuration:

.claude/settings.json
{
  "mcpServers": {
    "goose-orchestrator": {
      "command": "npx",
      "args": [
        "-y",
        "goose-orchestrator"
      ]
    }
  }
}

Add the config above to .claude/settings.json under the mcpServers key.

About

goose-orchestrator

A multi-model orchestrator-worker MCP extension for Goose. Routes prompts to specialized worker models with VRAM-aware hot-swapping — only the orchestrator and active worker are loaded at any time.

Features
  • Automatic task routing — orchestrator LLM decomposes prompts into a dependency graph of sub-tasks, each assigned to a specialized worker role
  • VRAM-aware model pool — LRU eviction ensures local models stay within a configurable memory budget; API models bypass the pool
  • Speculative preloading — the router predicts the next worker needed and begins loading it during the current step
  • Parallel execution — independent steps run concurrently up to a configurable worker limit
  • Per-worker temperature — each role has its own sampling temperature (e.g. 0.1 for math, 0.9 for creative)
  • Goose-native configuration — all settings managed through MCP tools in Goose's chat UI
  • Multi-provider — supports Ollama (local), OpenAI-compatible APIs, and Anthropic
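To make the VRAM-aware pool concrete, here is a minimal sketch of LRU eviction against a memory budget. This is illustrative only: the class and method names (`ModelPool`, `acquire`) and the per-model VRAM sizes are assumptions, not the extension's actual API, and real loading/unloading would call out to Ollama.

```python
from collections import OrderedDict

class ModelPool:
    """Hypothetical VRAM-aware model pool with LRU eviction."""

    def __init__(self, budget_gb: float):
        self.budget_gb = budget_gb
        # Insertion order doubles as recency order: oldest entry first.
        self.loaded: "OrderedDict[str, float]" = OrderedDict()  # name -> VRAM (GB)

    def used_gb(self) -> float:
        return sum(self.loaded.values())

    def acquire(self, name: str, size_gb: float) -> None:
        if name in self.loaded:
            self.loaded.move_to_end(name)  # mark as most recently used
            return
        # Evict least-recently-used models until the new one fits the budget.
        while self.loaded and self.used_gb() + size_gb > self.budget_gb:
            self.loaded.popitem(last=False)
        self.loaded[name] = size_gb

pool = ModelPool(budget_gb=24)
pool.acquire("qwen3:8b", 6)
pool.acquire("qwen2.5-coder:32b", 20)  # over budget, so qwen3:8b is evicted
```

API-backed models would simply skip `acquire`, matching the "API models bypass the pool" behavior described above.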
Default Worker Roles

| Role | Default Model | Temperature | Optimized For |
|------|---------------|-------------|---------------|
| deep_research | qwen3:32b | 0.4 | Multi-hop web research, paper analysis |
| local_rag | qwen3:8b | 0.2 | Local document retrieval, code search |
| code_gen | qwen2.5-coder:32b | 0.3 | Code generation, refactoring, debugging |
| summarizer | qwen3:8b | 0.5 | Condensing outputs, report writing |
| math_reasoning | qwen3:32b | 0.1 | Formal proofs, calculations, logic |
| creative | llama3:70b | 0.9 | Creative writing, brainstorming |

All roles are fully customizable — change models, add new roles, or remove existing ones.
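As a sketch of what "fully customizable" roles could look like, the table above can be thought of as a mapping from role name to model and sampling temperature. The field names below are illustrative assumptions, not the extension's real schema (configuration actually happens through MCP tools in Goose's chat UI):

```python
# Hypothetical role table mirroring a few of the defaults above.
WORKER_ROLES = {
    "code_gen":       {"model": "qwen2.5-coder:32b", "temperature": 0.3},
    "math_reasoning": {"model": "qwen3:32b",         "temperature": 0.1},
    "creative":       {"model": "llama3:70b",        "temperature": 0.9},
}

# Adding a custom role is just another entry; removing one deletes it.
WORKER_ROLES["sql_expert"] = {"model": "qwen3:8b", "temperature": 0.2}
```

The per-role temperature is the key design point: deterministic tasks (math, retrieval) run near 0, open-ended tasks (creative writing) near 1.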

Setup
Prerequisites
  • Python 3.12+
  • uv package manager
  • Ollama (for local models) and/or API keys for OpenAI/Anthropic
Installation

Clone the repo and

Tags

ai-agents, goose, mcp, mcp-server, multi-model, ollama, python


Quality Signals

Installs: 0
Last updated: 12 days ago
Security: A
README: New

Safety

Risk level: medium
Data access: read
Network access: none

Details

Source: github-crawl
Last commit: 4/1/2026
View on GitHub

Embed Badge

[![Loaditout](https://loaditout.ai/api/badge/santiagoaraoz2001-sketch/goose-orchestrator)](https://loaditout.ai/skills/santiagoaraoz2001-sketch/goose-orchestrator)