Configuration Guide
This guide covers all configuration options for Radium, including workspace settings, agent configuration, orchestration settings, and more.
Configuration Locations
Radium uses multiple configuration locations; a typical on-disk layout is sketched after this list:
- Workspace Config: .radium/config.toml (project-specific)
- User Config: ~/.radium/config.toml (user-specific)
- Orchestration Config: ~/.radium/orchestration.toml (orchestration settings)
- Agent Configs: agents/**/*.toml (agent definitions)
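For orientation, here is one possible on-disk layout. The config file names match the defaults above; the project and agent names are illustrative, and agent configs are assumed to live under the workspace root:
~/.radium/
  config.toml            # user-wide defaults
  orchestration.toml     # orchestration settings
my-project/
  .radium/
    config.toml          # workspace (project-specific) settings
    policy.toml          # policy rules (see Policy Configuration below)
  agents/
    review/
      code-reviewer.toml # discovered via agents/**/*.toml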
Workspace Configuration
Workspace configuration is stored in .radium/config.toml:
[workspace]
name = "my-project"
description = "My project description"
[engines]
default = "gemini"
[orchestration]
enabled = true
provider = "gemini"
[memory]
enabled = true
retention_days = 30
[policy]
approval_mode = "ask"
Workspace Settings
- name: Workspace name
- description: Workspace description
- engines.default: Default AI provider
- orchestration.enabled: Enable/disable orchestration
- orchestration.provider: Default orchestration provider
- memory.enabled: Enable memory system
- memory.retention_days: Memory retention period
- policy.approval_mode: Default policy approval mode
Agent Configuration
Agent configuration is stored in agents/**/*.toml:
[agent]
id = "my-agent"
name = "My Agent"
description = "Agent description"
prompt_path = "prompts/my-agent.md"
engine = "gemini"
model = "gemini-2.0-flash-exp"
[agent.persona]
[agent.persona.models]
primary = "gemini-2.0-flash-exp"
fallback = "gemini-1.5-pro"
premium = "gemini-1.5-pro"
[agent.persona.performance]
profile = "balanced"
estimated_tokens = 1500
Agent Settings
- id: Unique agent identifier
- name: Human-readable name
- description: Agent description
- prompt_path: Path to prompt file
- engine: AI provider (gemini, claude, openai)
- model: Specific model to use
- persona: Persona system configuration
Learn more: Agent Configuration Guide
Orchestration Configuration
Orchestration configuration is stored in ~/.radium/orchestration.toml:
[orchestration]
enabled = true
provider = "gemini"
model = "gemini-2.0-flash-exp"
[orchestration.fallback]
enabled = true
provider = "claude"
model = "claude-3-haiku"
[orchestration.routing]
strategy = "intelligent"
max_agents = 3
Orchestration Settings
- enabled: Enable/disable orchestration
- provider: Default AI provider for orchestration
- model: Model to use for orchestration
- fallback: Fallback provider configuration
- routing.strategy: Routing strategy (intelligent, round-robin, etc.)
- routing.max_agents: Maximum agents per workflow
Learn more: Orchestration Configuration
Policy Configuration
Policy configuration is stored in .radium/policy.toml:
approval_mode = "ask"
[[rules]]
name = "Allow safe file operations"
priority = "user"
action = "allow"
tool_pattern = "read_*"
[[rules]]
name = "Require approval for file writes"
priority = "user"
action = "ask_user"
tool_pattern = "write_*"
[[rules]]
name = "Deny dangerous shell commands"
priority = "admin"
action = "deny"
tool_pattern = "run_terminal_cmd"
arg_pattern = "rm -rf *"
Policy Settings
- approval_mode: Default approval mode (yolo, autoedit, ask)
- rules: Policy rules array
- name: Rule name
- priority: Rule priority (admin, user, default)
- action: Rule action (allow, deny, ask_user)
- tool_pattern: Tool name pattern
- arg_pattern: Argument pattern (optional)
Learn more: Policy Engine
Environment Variables
API Keys
# Google AI (Gemini)
export GOOGLE_AI_API_KEY="your-key-here"
# Anthropic (Claude)
export ANTHROPIC_API_KEY="your-key-here"
# OpenAI (GPT)
export OPENAI_API_KEY="your-key-here"
Configuration Overrides
# Override default engine
export RADIUM_DEFAULT_ENGINE="claude"
# Override orchestration provider
export RADIUM_ORCHESTRATION_PROVIDER="gemini"
# Enable debug logging
export RUST_LOG="debug"
Self-Hosted Model Configuration
Ollama
export UNIVERSAL_BASE_URL="http://localhost:11434/v1"
export OLLAMA_MODEL="llama3.2"
vLLM
export UNIVERSAL_BASE_URL="http://localhost:8000/v1"
export VLLM_MODEL="meta-llama/Llama-2-7b-chat-hf"
LocalAI
export UNIVERSAL_BASE_URL="http://localhost:8080/v1"
export LOCALAI_MODEL="gpt-3.5-turbo"
Learn more: Self-Hosted Models
Extension Configuration
Extensions are configured via radium-extension.json:
{
"name": "my-extension",
"version": "1.0.0",
"description": "My extension",
"author": "Your Name",
"components": {
"prompts": ["prompts/**/*.md"],
"mcp_servers": ["mcp/*.json"],
"commands": ["commands/*.toml"],
"hooks": ["hooks/*.toml"]
}
}
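The component globs are presumably resolved relative to the directory containing radium-extension.json, so an extension matching the manifest above might be laid out roughly like this (file names are illustrative):
my-extension/
  radium-extension.json
  prompts/
    review/
      code-review.md   # matched by "prompts/**/*.md"
  mcp/
    github.json        # matched by "mcp/*.json"
  commands/
    deploy.toml        # matched by "commands/*.toml"
  hooks/
    pre-run.toml       # matched by "hooks/*.toml"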
Learn more: Extension System
Context Sources Configuration
Context sources are configured in workspace or agent configs:
[context_sources]
files = ["docs/**/*.md", "README.md"]
http = ["https://api.example.com/docs"]
jira = { url = "https://jira.example.com", project = "PROJ" }
braingrid = { project_id = "PROJ-14" }
Learn more: Context Sources
Memory Configuration
Memory settings control how agent outputs are stored:
[memory]
enabled = true
retention_days = 30
max_entries_per_plan = 100
truncate_length = 2000
Memory Settings
- enabled: Enable/disable memory system
- retention_days: How long to keep memory entries
- max_entries_per_plan: Maximum entries per plan
- truncate_length: Maximum length per entry
Learn more: Memory & Context
CLI Configuration
CLI behavior can be configured via environment variables:
# Output format
export RADIUM_OUTPUT_FORMAT="json" # json, table, plain
# Verbose output
export RADIUM_VERBOSE="true"
# Color output
export RADIUM_COLOR="auto" # auto, always, never
Validation
Validate your configuration:
# Validate workspace config
rad workspace doctor
# Validate agent configs
rad agents validate
# Validate policy config
rad policy validate
# Validate extension configs
rad extension validate
Configuration Precedence
Configuration is loaded in this order (later overrides earlier; see the example below):
- Default values
- User config (~/.radium/config.toml)
- Workspace config (.radium/config.toml)
- Environment variables
- Command-line arguments
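As a concrete sketch of the layering, assume the workspace config selects gemini while an environment variable requests claude; the environment variable wins for any command run in that shell:
# .radium/config.toml (workspace default):
#   [engines]
#   default = "gemini"

# Environment variable set in the shell -- overrides the workspace config
export RADIUM_DEFAULT_ENGINE="claude"

# An explicit command-line argument, if given, would in turn override the variable.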
Best Practices
Workspace-Specific Settings
- Store workspace-specific settings in .radium/config.toml
- Use environment variables for sensitive data such as API keys (see the sketch after this list)
- Version control the workspace config (exclude sensitive data)
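One way to follow this split, sketched with bash: keep API keys in your shell profile so they never enter the repository, and commit only the non-sensitive workspace config.
# ~/.bashrc (or ~/.zshrc) -- secrets stay outside the repository
export GOOGLE_AI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"

# In the project, commit the shared workspace settings
git add .radium/config.toml
git commit -m "Add Radium workspace config"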
Agent Organization
- Organize agents by category in subdirectories (see the example layout below)
- Use descriptive agent IDs and names
- Document agent purposes in descriptions
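For example, agents grouped by category might look like the following; the category and agent names are purely illustrative, and anything matching agents/**/*.toml is discovered:
agents/
  engineering/
    code-reviewer.toml
    test-writer.toml
  research/
    literature-scout.toml
  docs/
    changelog-author.toml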
Policy Configuration
- Start with restrictive policies (a starting sketch follows this list)
- Gradually relax rules as needed
- Use approval modes appropriately
- Document policy decisions
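A restrictive starting point might look like the sketch below, using only the rule fields documented above. The catch-all "*" pattern is an assumption about the pattern syntax; check it with rad policy validate before relying on it.
approval_mode = "ask"

# Read-only tools are considered safe
[[rules]]
name = "Allow read-only tools"
priority = "user"
action = "allow"
tool_pattern = "read_*"

# Everything else requires explicit approval ("*" assumed to match any tool)
[[rules]]
name = "Ask before anything else"
priority = "default"
action = "ask_user"
tool_pattern = "*"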
Troubleshooting
Configuration Not Loading
# Check config file location
ls -la .radium/config.toml
# Validate config syntax
rad workspace doctor
Environment Variables Not Working
# Check if variables are set
echo $GOOGLE_AI_API_KEY
# Verify shell profile
cat ~/.bashrc | grep RADIUM
Agent Not Found
# Check agent discovery
rad agents list
# Verify agent config location
find . -name "*.toml" -path "*/agents/*"
Next Steps
- Core Concepts - Understand Radium concepts
- Agent Configuration - Detailed agent config
- Orchestration Configuration - Orchestration setup
- Policy Engine - Policy configuration
Need help? Check the Troubleshooting Guide or open an issue.