Orchestration User Guide
Introduction
Radium's orchestration system allows you to interact naturally with your AI assistants without manually selecting which agent to use for each task. The orchestrator automatically analyzes your input, routes tasks to appropriate specialist agents, coordinates multi-agent workflows, and synthesizes results - all while remaining completely model-agnostic.
What is Orchestration?
Orchestration is Radium's intelligent task routing system that:
- Automatically selects the right agent(s) for each task
- Coordinates workflows involving multiple agents
- Synthesizes results from agent interactions
- Works across providers (Gemini, Claude, OpenAI, prompt-based fallback)
Instead of typing /chat senior-developer and then your request, you can simply type your request naturally, and the orchestrator will route it to the appropriate agent automatically.
Getting Started
First-Time Setup
Orchestration is enabled by default when you start Radium TUI. On first run, Radium will create a default configuration file at ~/.radium/orchestration.toml.
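The generated defaults look roughly like the quick reference later in this guide; a minimal sketch of the key top-level settings (the exact contents may vary by Radium version, and the file also contains per-provider sections):
[orchestration]
enabled = true
default_provider = "gemini"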
If orchestration isn't working:
- Check API keys: Ensure you have at least one API key set:
  export GEMINI_API_KEY="your-key-here"
  # or
  export ANTHROPIC_API_KEY="your-key-here"
  # or
  export OPENAI_API_KEY="your-key-here"
- Verify orchestration status:
  /orchestrator
- Enable if needed:
  /orchestrator toggle
Quick Start Example
You: I need to refactor the authentication module
Analyzing...
Invoking: senior-developer
Complete (2.3s)
Assistant: I've analyzed your authentication module and identified several areas for improvement...
Natural Conversation
How It Works
When orchestration is enabled, you can type naturally in the TUI without any command prefixes:
- "I need help with authentication" → Routes to appropriate agent
- "Refactor the user service module" → Routes to developer agent
- "Create tests for the API endpoints" → Routes to testing agent
- "Document the database schema" → Routes to documentation agent
The orchestrator automatically:
- Analyzes your intent
- Selects the best agent(s) for the task
- Executes the task
- Returns synthesized results
Commands vs. Orchestration
Commands (starting with /) always bypass orchestration:
- /chat agent-name - Direct chat with a specific agent
- /agents - List available agents
- /orchestrator - Orchestration configuration
Natural input (no /) routes through orchestration if enabled.
Commands
/orchestrator - Show Status
Display current orchestration configuration:
/orchestrator
Output shows:
- Enabled/disabled state
- Current provider and model
- Service initialization status
/orchestrator toggle - Enable/Disable
Toggle orchestration on or off:
/orchestrator toggle
When disabled, natural language input will fall back to regular chat mode. Use this if you prefer explicit agent selection.
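To keep orchestration off across sessions, you can also persist the setting by editing the config file directly; a sketch using the enabled flag described in the configuration reference below:
[orchestration]
enabled = false
default_provider = "gemini"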
/orchestrator switch <provider> - Change Provider
Switch between AI providers:
/orchestrator switch gemini
/orchestrator switch claude
/orchestrator switch openai
/orchestrator switch prompt_based
Available providers:
- gemini - Google Gemini models (default)
- claude - Anthropic Claude models
- openai - OpenAI GPT models
- prompt_based - Prompt-based fallback (no API key required)
Provider changes are automatically saved to your configuration file.
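For example, after /orchestrator switch claude, the saved file's default_provider should reflect the new choice (a sketch; the rest of your settings are left as they were):
[orchestration]
enabled = true
default_provider = "claude"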
/orchestrator config - Show Full Configuration
Display complete configuration details:
/orchestrator config
Shows all provider settings, model configurations, temperature, iteration limits, and fallback settings.
/orchestrator refresh - Reload Agent Registry
Reload all agent tool definitions:
/orchestrator refresh
Use this after adding, modifying, or removing agent configuration files. The orchestrator will discover and use the updated agents.
Configuration
Orchestration configuration supports both workspace-based and home directory settings. See the Configuration Guide for complete details.
Configuration File Locations
Orchestration settings are stored in (in priority order):
- Workspace config: .radium/config/orchestration.toml (preferred)
- Home directory: ~/.radium/orchestration.toml (fallback)
Quick Configuration Reference
[orchestration]
enabled = true
default_provider = "gemini"
[orchestration.gemini]
model = "gemini-2.0-flash-thinking-exp"
temperature = 0.7
max_tool_iterations = 5
[orchestration.claude]
model = "claude-3-5-sonnet-20241022"
temperature = 0.7
max_tool_iterations = 5
max_tokens = 4096
[orchestration.openai]
model = "gpt-4-turbo-preview"
temperature = 0.7
max_tool_iterations = 5
[orchestration.prompt_based]
temperature = 0.7
max_tool_iterations = 5
[orchestration.fallback]
enabled = true
chain = ["gemini", "claude", "openai", "prompt_based"]
max_retries = 2
Configuration Options
Global Settings:
- enabled - Enable/disable orchestration (boolean)
- default_provider - Primary provider to use (gemini, claude, openai, prompt_based)
Provider Settings:
- model - Model identifier (provider-specific)
- temperature - Generation temperature (0.0-1.0)
- max_tool_iterations - Maximum tool execution iterations (default: 5)
- max_tokens - Maximum output tokens (Claude only)
Fallback Settings:
- enabled - Enable automatic fallback (boolean)
- chain - Fallback provider order (array)
- max_retries - Maximum retries per provider (default: 2)
Providers
Gemini (Google)
Best for: Fast responses, general tasks, cost-effective
Requirements: GEMINI_API_KEY environment variable
Default Model: gemini-2.0-flash-thinking-exp
Configuration:
[orchestration.gemini]
model = "gemini-2.0-flash-thinking-exp"
temperature = 0.7
max_tool_iterations = 5
Claude (Anthropic)
Best for: Complex reasoning, code analysis, high-quality outputs
Requirements: ANTHROPIC_API_KEY environment variable
Default Model: claude-3-5-sonnet-20241022
Configuration:
[orchestration.claude]
model = "claude-3-5-sonnet-20241022"
temperature = 0.7
max_tool_iterations = 5
max_tokens = 4096
OpenAI
Best for: GPT-4 capabilities, general purpose
Requirements: OPENAI_API_KEY environment variable
Default Model: gpt-4-turbo-preview
Configuration:
[orchestration.openai]
model = "gpt-4-turbo-preview"
temperature = 0.7
max_tool_iterations = 5
Prompt-Based Fallback
Best for: Offline use, testing, when API keys unavailable
Requirements: None (uses local model abstraction)
Configuration:
[orchestration.prompt_based]
temperature = 0.7
max_tool_iterations = 5
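To run entirely without API keys, you can also make the fallback your primary provider; a sketch reusing the default_provider key from the configuration reference:
[orchestration]
enabled = true
default_provider = "prompt_based"

[orchestration.prompt_based]
temperature = 0.7
max_tool_iterations = 5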
Troubleshooting
For detailed troubleshooting, see the Troubleshooting Guide.
Quick Troubleshooting
Orchestration Not Working
Symptoms: Natural input doesn't trigger orchestration
Solutions:
- Check orchestration is enabled: /orchestrator
- Verify API keys are set: echo $GEMINI_API_KEY
- Check service initialization in status output
- Try toggling: /orchestrator toggle (off, then on)
- Check logs for initialization errors
Wrong Agent Selected
Symptoms: Orchestrator routes to incorrect agent
Solutions:
- Be more specific in your request
- Include context about what you need
- Try different phrasings
- Use explicit /chat agent-name if needed
- Check available agents: /agents
Provider Errors
Symptoms: "Orchestration error" or API failures
Solutions:
- Verify API key is valid and set
- Check API service status
- Switch to a different provider: /orchestrator switch claude
- Enable fallback in config (should be enabled by default; see the sketch after this list)
- Try the prompt-based fallback: /orchestrator switch prompt_based
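If fallback was turned off, or you want a provider with a valid key tried first, adjust the fallback block; a sketch, assuming the chain order is the order providers are tried (per the configuration options above):
[orchestration.fallback]
enabled = true
# List the provider you trust most first; prompt_based last as the no-key safety net.
chain = ["claude", "gemini", "openai", "prompt_based"]
max_retries = 2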
Timeout Errors
Symptoms: "Finished with: max_iterations" or timeouts
Solutions:
- Break task into smaller requests
- Increase max_tool_iterations in config (see the sketch after this list)
- Simplify your request
- Check network connectivity
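A sketch of raising the iteration ceiling for the provider you use (the value 10 is illustrative; higher limits allow longer-running workflows):
[orchestration.gemini]
model = "gemini-2.0-flash-thinking-exp"
temperature = 0.7
# Raised from the default of 5 to give multi-step tasks more room.
max_tool_iterations = 10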
Slow Performance
Symptoms: Orchestration takes too long
Solutions:
- Switch to faster provider (Gemini Flash)
- Reduce max_tool_iterations if excessive
- Check network latency
- Simplify requests
- Check if multiple agents are being invoked unnecessarily
Tips for Best Results
- Be Specific: Clear, specific requests route better
  - ✅ "Refactor the authentication service to use JWT tokens"
  - ❌ "Fix auth"
- Provide Context: Include relevant information
  - ✅ "Update the User model to add email verification field"
  - ❌ "Add email to user"
- One Task at a Time: Break complex workflows into steps
  - ✅ "Design the database schema for user profiles"
  - ❌ "Build the entire user profile system with auth, database, API, and frontend"
- Use Commands for Explicit Control: When you know exactly which agent you want
  - /chat senior-developer for development tasks
  - /chat tester for testing tasks
Examples
See orchestration-workflows.md for detailed workflow examples.
Advanced Usage
Multi-Agent Workflows
The orchestrator can coordinate multiple agents for complex tasks:
You: Create a new feature for task templates
Analyzing...
1. product-manager - Define feature requirements
2. architect - Design implementation approach
3. senior-developer - Implement feature
4. tester - Create test suite
Executing 4 agents...
Complete (15.2s)
Custom Agent Routing
Agents are automatically discovered from:
- ./agents/ - Project-local agents
- ~/.radium/agents/ - User agents
Agent descriptions and capabilities are used by the orchestrator to route tasks. See agent-creation-guide.md for creating custom agents.
Performance Optimization
- Provider Selection: Choose faster providers (Gemini Flash) for quick tasks
- Iteration Limits: Adjust max_tool_iterations to prevent long-running workflows (see the sketch after this list)
- Fallback Chain: Configure fallback order for reliability
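Putting these together, a sketch of a configuration tuned for quick turnaround (values are illustrative, not recommendations):
[orchestration]
default_provider = "gemini"

[orchestration.gemini]
# A fast flash-class model with a tight iteration cap keeps responses short.
model = "gemini-2.0-flash-thinking-exp"
temperature = 0.7
max_tool_iterations = 3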
Related Documentation
- Orchestration Configuration Guide - Complete configuration reference
- Orchestration Troubleshooting Guide - Common issues and solutions
- Orchestration Testing Guide - Manual testing procedures
- Agent Configuration - Agent setup
- Orchestration Workflows - Example workflows
- Agent Creation Guide - Creating custom agents