Agent Creation Guide
This comprehensive guide will help you create custom AI agents for Radium. Agents are specialized AI assistants that perform specific tasks within your workflows. This guide covers everything from basic configuration to advanced patterns and best practices.
Table of Contents
- Introduction
- Agent Configuration Format
- Prompt Template Structure
- Agent Discovery and Organization
- Creating Your First Agent
- Common Agent Patterns
- Advanced Configuration
- Best Practices
- Testing and Validation
- Troubleshooting
Introduction
What is an Agent?
An agent in Radium is a specialized AI assistant configured to perform specific tasks. Each agent consists of:
- Configuration file (`.toml`): defines the agent's identity, capabilities, and behavior
- Prompt template (`.md`): contains the instructions that guide the agent's behavior
Agents are automatically discovered from configured directories and can be used in workflows, executed directly via CLI, or integrated into your development process.
Why Create Custom Agents?
- Specialization: Create agents tailored to your specific domain or use case
- Consistency: Ensure consistent behavior across projects and teams
- Reusability: Share agents across multiple projects
- Optimization: Configure agents with optimal models and settings for their tasks
Agent Configuration Format
Basic Structure
Agent configurations are written in TOML format and stored in .toml files. The basic structure is:
[agent]
id = "my-agent"
name = "My Agent"
description = "A description of what this agent does"
prompt_path = "prompts/agents/my-category/my-agent.md"
Required Fields
id (string)
A unique identifier for the agent. Use kebab-case (lowercase with hyphens).
Examples:
- Valid: `arch-agent`, `code-review-agent`
- Invalid: `myAgent` (use kebab-case), `agent 1` (no spaces)
Best Practices:
- Use descriptive names that indicate the agent's purpose
- Keep IDs concise but clear
- Use consistent naming conventions across your agents
name (string)
A human-readable name for the agent displayed in CLI and UI.
Examples:
- `"Architecture Agent"`
- `"Code Review Agent"`
- `"Documentation Generator"`
description (string)
A brief description of what the agent does. This helps users discover and understand the agent's purpose.
Examples:
- `"Defines system architecture and technical design decisions"`
- `"Reviews code for quality, security, and best practices"`
- `"Generates comprehensive documentation and API references"`
prompt_path (PathBuf)
The path to the markdown file containing the agent's prompt template. Can be absolute or relative to the workspace root.
Examples:
- `"prompts/agents/core/arch-agent.md"` (relative)
- `"/absolute/path/to/prompt.md"` (absolute)
Path Resolution:
- Relative paths are resolved from the workspace root
- If the prompt file is in the same directory structure as the config, use relative paths
- The path must point to an existing markdown file
Optional Fields
engine (string)
The default AI engine to use for this agent. If not specified, the engine must be provided at runtime.
Supported Engines:
- `"gemini"` - Google Gemini models
- `"openai"` - OpenAI models (GPT-4, GPT-3.5)
- `"claude"` - Anthropic Claude models
- `"codex"` - OpenAI Codex models
Example:
engine = "gemini"
model (string)
The specific model to use. Must be compatible with the specified engine.
Examples:
- `"gemini-2.0-flash-exp"` (Gemini)
- `"gpt-4"` (OpenAI)
- `"claude-3-opus-20240229"` (Anthropic)
Example:
model = "gemini-2.0-flash-exp"
reasoning_effort (string)
The default reasoning effort level. Controls how much computational effort the model should use for reasoning.
Valid Values:
- `"low"` - Minimal reasoning, faster responses
- `"medium"` - Balanced reasoning (default)
- `"high"` - Maximum reasoning, slower but more thorough
Example:
reasoning_effort = "high"
When to Use:
- Low: Simple tasks, code generation, quick responses
- Medium: General-purpose tasks, balanced performance
- High: Complex reasoning, architecture decisions, critical reviews
mirror_path (PathBuf)
Optional mirror path for RAD-agents. Used when agents are mirrored from another location.
Example:
mirror_path = "/path/to/original/agent"
Advanced Configuration
Loop Behavior
Configure an agent to request looping back to previous steps in a workflow.
[agent.loop_behavior]
steps = 2 # Number of steps to go back
max_iterations = 5 # Maximum iterations before stopping (optional)
skip = ["step-1"] # List of step IDs to skip during loop (optional)
Fields:
- `steps` (required): Number of steps to go back when looping (must be > 0)
- `max_iterations` (optional): Maximum number of loop iterations (must be > 0 if present)
- `skip` (optional): List of step IDs to skip during loop execution
Use Cases:
- Iterative refinement agents
- Agents that need to fix issues in previous steps
- Validation agents that may require multiple passes
Example:
[agent]
id = "refinement-agent"
name = "Refinement Agent"
description = "Refines output based on feedback"
prompt_path = "prompts/agents/core/refinement-agent.md"
[agent.loop_behavior]
steps = 1
max_iterations = 3
Trigger Behavior
Configure an agent to dynamically trigger other agents during workflow execution.
[agent.trigger_behavior]
trigger_agent_id = "fallback-agent"
Fields:
- `trigger_agent_id` (optional): Default agent ID to trigger (can be overridden in a workflow)
Use Cases:
- Coordinator agents that delegate to specialized agents
- Fallback agents for error handling
- Multi-agent workflows
Example:
[agent]
id = "coordinator"
name = "Coordinator Agent"
description = "Coordinates multiple agents"
prompt_path = "prompts/agents/core/coordinator.md"
[agent.trigger_behavior]
trigger_agent_id = "worker-agent"
Capabilities
Configure agent capabilities for dynamic model selection and concurrency control.
[agent.capabilities]
model_class = "fast" # Options: "fast", "balanced", "reasoning"
cost_tier = "low" # Options: "low", "medium", "high"
max_concurrent_tasks = 10 # Maximum concurrent tasks (default: 5)
Fields:
- `model_class` (required): Model category - `"fast"` (speed-optimized), `"balanced"` (balanced speed/quality), or `"reasoning"` (deep reasoning)
- `cost_tier` (required): Cost tier - `"low"`, `"medium"`, or `"high"`
- `max_concurrent_tasks` (optional): Maximum number of concurrent tasks (default: 5)
Use Cases:
- Fast agents for quick iterations (model_class: "fast", cost_tier: "low")
- Balanced agents for general tasks (model_class: "balanced", cost_tier: "medium")
- Reasoning agents for complex problems (model_class: "reasoning", cost_tier: "high")
Example:
[agent]
id = "fast-code-gen"
name = "Fast Code Generator"
description = "Generates code quickly"
prompt_path = "prompts/agents/core/fast-code-gen.md"
[agent.capabilities]
model_class = "fast"
cost_tier = "low"
max_concurrent_tasks = 20
Persona Configuration
Configure persona metadata for intelligent model selection, cost optimization, and automatic fallback chains.
[agent.persona]
[agent.persona.models]
primary = "gemini-2.0-flash-exp"
fallback = "gemini-2.0-flash-thinking" # Optional
premium = "gemini-1.5-pro" # Optional
[agent.persona.performance]
profile = "balanced" # Options: "speed", "balanced", "thinking", "expert"
estimated_tokens = 1500 # Optional
Fields:
- `primary` (required): Primary recommended model
- `fallback` (optional): Model to use if the primary is unavailable
- `premium` (optional): Premium model for critical tasks
- `profile` (optional): Performance profile - `"speed"`, `"balanced"`, `"thinking"`, or `"expert"` (default: `"balanced"`)
- `estimated_tokens` (optional): Estimated token usage per execution
Model Format: Models can be specified in two forms:
- Simple: `"gemini-2.0-flash-exp"` (uses the agent's engine)
- Full: `"gemini:gemini-2.0-flash-exp"` (explicit engine)
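For instance, both forms can appear side by side in a persona block (the model names here are illustrative):

```toml
[agent.persona.models]
# Simple form: the engine comes from the agent's `engine` field
primary = "gemini-2.0-flash-exp"
# Full form: the engine is stated explicitly before the colon
fallback = "gemini:gemini-2.0-flash-thinking"
```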
Performance Profiles:
- `speed`: Fast responses, lower cost - best for simple tasks
- `balanced`: Balanced speed and quality - best for general tasks
- `thinking`: Deep reasoning - best for complex problems
- `expert`: Expert-level reasoning, highest cost - best for critical tasks
Use Cases:
- Automatic model selection based on task requirements
- Cost estimation and budget tracking
- Automatic fallback when models are unavailable
- Matching model capabilities to task complexity
Example:
[agent]
id = "arch-agent"
name = "Architecture Agent"
description = "Defines system architecture"
prompt_path = "prompts/agents/core/arch-agent.md"
[agent.persona]
[agent.persona.models]
primary = "gemini-2.0-flash-thinking"
fallback = "gemini-2.0-flash-exp"
premium = "gemini-1.5-pro"
[agent.persona.performance]
profile = "thinking"
estimated_tokens = 2000
Quick Start:
When creating a new agent, use the `--with-persona` flag to generate a persona template:
rad agents create my-agent --with-persona
For more details, see the Persona System User Guide.
Sandbox Configuration
Configure sandboxing for safe command execution in isolated environments.
[agent.sandbox]
type = "docker" # Options: "docker", "podman", "seatbelt", "none"
image = "rust:latest" # Docker/Podman image (required for docker/podman)
profile = "restricted" # Sandbox profile (optional)
network_mode = "isolated" # Network mode (optional)
Fields:
- `type` (required): Sandbox type - `"docker"`, `"podman"`, `"seatbelt"` (macOS only), or `"none"`
- `image`: Container image (required for the `"docker"` and `"podman"` sandbox types, otherwise unused)
- `profile` (optional): Sandbox profile (e.g., `"restricted"`, `"permissive"`)
- `network_mode` (optional): Network isolation mode - `"isolated"`, `"bridged"`, or `"host"`
Use Cases:
- Agents that execute untrusted code
- Agents that need to run in specific environments
- Agents that require network isolation
Example:
[agent]
id = "safe-exec"
name = "Safe Execution Agent"
description = "Executes commands in sandbox"
prompt_path = "prompts/agents/core/safe-exec.md"
[agent.sandbox]
type = "docker"
image = "ubuntu:latest"
profile = "restricted"
network_mode = "isolated"
Complete Configuration Example
[agent]
id = "arch-agent"
name = "Architecture Agent"
description = "Defines system architecture and technical design decisions"
prompt_path = "prompts/agents/core/arch-agent.md"
engine = "gemini"
model = "gemini-2.0-flash-exp"
reasoning_effort = "high"
[agent.loop_behavior]
steps = 1
max_iterations = 3
[agent.capabilities]
model_class = "reasoning"
cost_tier = "high"
max_concurrent_tasks = 3
Prompt Template Structure
File Format
Prompt templates are markdown files (.md) that contain instructions for the agent. The structure is flexible, but following a consistent pattern improves maintainability.
Recommended Structure
# Agent Name
Brief description of what the agent does.
## Role
Define the agent's role and primary responsibilities here.
## Capabilities
- List the agent's core capabilities
- Include what tasks it can perform
- Specify any constraints or limitations
## Input
Describe what inputs this agent expects:
- Context from previous steps
- Required parameters
- Optional configuration
## Output
Describe what this agent produces:
- Expected output format
- Key deliverables
- Success criteria
## Instructions
Provide step-by-step instructions for the agent:
1. First step - explain what to do
2. Second step - detail the process
3. Third step - clarify expectations
4. Continue as needed...
## Examples
### Example 1: [Scenario Name]
**Input:**
Provide sample input
**Expected Output:**
Show expected result
### Example 2: [Another Scenario]
**Input:**
Different scenario input
**Expected Output:**
Corresponding output
## Notes
- Add any important notes
- Include edge cases to consider
- Document best practices
Key Sections Explained
Role
Clearly define what the agent is and what it's responsible for. This helps the AI understand its identity and purpose.
Example:
## Role
You are an expert software architect responsible for designing robust, scalable, and maintainable system architectures. You analyze requirements, evaluate trade-offs, and make informed technical decisions that align with project goals and constraints.
Capabilities
List what the agent can do. Be specific about capabilities and limitations.
Example:
## Capabilities
- Design high-level system architecture and component interactions
- Select appropriate technologies, frameworks, and design patterns
- Define data models, APIs, and integration strategies
- Evaluate architectural trade-offs and document decisions
- Create architecture diagrams and technical specifications
Instructions
Provide clear, step-by-step instructions. Use numbered lists for sequential processes.
Example:
## Instructions
1. **Analyze Requirements**
- Review functional and non-functional requirements
- Identify critical user flows and data flows
- Clarify ambiguous requirements and constraints
2. **Design System Components**
- Break system into logical components and services
- Define component responsibilities and boundaries
- Map component interactions and dependencies
Examples
Include concrete examples showing expected inputs and outputs. Examples help the AI understand the desired format and quality.
Example:
## Examples
### Example 1: E-Commerce Platform
**Input:**
Requirements:
- Multi-tenant SaaS platform for online stores
- Support 10,000+ concurrent users
- Real-time inventory management
**Expected Output:**
```markdown
# E-Commerce Platform Architecture

## System Overview
- Microservices architecture with API Gateway
- Event-driven communication for inventory updates
```
Agent Discovery and Organization
Directory Structure
Agents are organized in a directory structure that reflects their categories:
agents/
├── core/         # Core agents (arch, plan, code, review, doc)
├── design/       # Design agents
├── testing/      # Testing agents
├── deployment/   # Deployment agents
└── custom/       # User-defined agents
Search Path Hierarchy
Agents are discovered from multiple directories in this order (precedence from highest to lowest):
1. Project-local agents: `./agents/`
2. User agents: `~/.radium/agents/`
3. Workspace agents: `$RADIUM_WORKSPACE/agents/` (if `RADIUM_WORKSPACE` is set)
4. Project-level extension agents: `./.radium/extensions/*/agents/`
5. User-level extension agents: `~/.radium/extensions/*/agents/`
Precedence Rules:
- Agents from higher-precedence directories override agents with the same ID from lower-precedence directories
- This allows project-specific agents to override user-level or extension agents
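For example, if both of these hypothetical files declare an agent with `id = "arch-agent"`, only the project-local copy is used:

```
./agents/core/arch-agent.toml          # discovered (project-local, highest precedence)
~/.radium/agents/core/arch-agent.toml  # shadowed by the project-local agent
```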
Category Derivation
The agent's category is automatically derived from the directory structure:
- `agents/core/arch-agent.toml` → category: `"core"`
- `agents/custom/my-agent.toml` → category: `"custom"`
- `agents/rad-agents/design/design-agent.toml` → category: `"rad-agents/design"`
The category is determined by the parent directory path relative to the agents root.
File Naming Convention
- Configuration files: `{agent-id}.toml`
- Prompt files: `{agent-id}.md`
- Keep the agent ID consistent between config and prompt files
Example:
agents/core/
└── arch-agent.toml
prompts/agents/core/
└── arch-agent.md
Creating Your First Agent
Step 1: Choose a Category
Decide which category your agent belongs to. If none fit, create a new category directory.
Common Categories:
- `core` - Essential agents used across projects
- `design` - Design and architecture agents
- `testing` - Testing and quality assurance agents
- `deployment` - Deployment and infrastructure agents
- `custom` - Project-specific agents
Step 2: Create the Configuration File
Create a new TOML file in the appropriate directory:
mkdir -p agents/my-category
touch agents/my-category/my-agent.toml
Add the basic configuration:
[agent]
id = "my-agent"
name = "My Agent"
description = "Does something useful"
prompt_path = "prompts/agents/my-category/my-agent.md"
Step 3: Create the Prompt Template
Create the prompt file at the specified path:
mkdir -p prompts/agents/my-category
touch prompts/agents/my-category/my-agent.md
Write the prompt template following the recommended structure:
# My Agent
Does something useful.
## Role
You are an expert in [domain] responsible for [primary responsibility].
## Capabilities
- Capability 1
- Capability 2
- Capability 3
## Instructions
1. First step
2. Second step
3. Third step
## Examples
### Example 1: Basic Use Case
**Input:**
Sample input
**Expected Output:**
Expected output
Step 4: Validate Your Agent
Use the CLI to validate your agent:
rad agents validate
Or validate a specific agent:
rad agents info my-agent
Step 5: Test Discovery
Verify your agent is discovered:
rad agents list
Your agent should appear in the list.
Common Agent Patterns
Pattern 1: Architecture Agent
Architecture agents design system architecture and make technical decisions.
Configuration:
[agent]
id = "arch-agent"
name = "Architecture Agent"
description = "Defines system architecture and technical design decisions"
prompt_path = "prompts/agents/core/arch-agent.md"
engine = "gemini"
model = "gemini-2.0-flash-exp"
reasoning_effort = "high"
Prompt Structure:
- Role: Software architect
- Capabilities: System design, technology selection, architecture decisions
- Instructions: Requirements analysis, component design, technology selection, documentation
- Examples: Different architecture scenarios
Pattern 2: Code Generation Agent
Code generation agents implement features and write production code.
Configuration:
[agent]
id = "code-agent"
name = "Code Implementation Agent"
description = "Implements features and writes production-ready code"
prompt_path = "prompts/agents/core/code-agent.md"
engine = "gemini"
model = "gemini-2.0-flash-exp"
reasoning_effort = "medium"
Prompt Structure:
- Role: Software engineer
- Capabilities: Code implementation, testing, documentation
- Instructions: Specification reading, planning, TDD, implementation, refactoring
- Examples: Different implementation scenarios
Pattern 3: Code Review Agent
Review agents analyze code for quality, security, and best practices.
Configuration:
[agent]
id = "review-agent"
name = "Code Review Agent"
description = "Reviews code for quality, security, and best practices"
prompt_path = "prompts/agents/core/review-agent.md"
engine = "gemini"
model = "gemini-2.0-flash-exp"
reasoning_effort = "high"
Prompt Structure:
- Role: Code reviewer
- Capabilities: Bug detection, security analysis, quality assessment
- Instructions: Review checklist, issue prioritization, feedback format
- Examples: Different review scenarios
Pattern 4: Documentation Agent
Documentation agents generate comprehensive documentation.
Configuration:
[agent]
id = "doc-agent"
name = "Documentation Agent"
description = "Generates comprehensive documentation and API references"
prompt_path = "prompts/agents/core/doc-agent.md"
engine = "gemini"
model = "gemini-2.0-flash-exp"
reasoning_effort = "medium"
Prompt Structure:
- Role: Technical writer
- Capabilities: README generation, API documentation, tutorials
- Instructions: Documentation types, audience consideration, examples
- Examples: Different documentation types
Pattern 5: Planning Agent
Planning agents break down requirements into structured tasks.
Configuration:
[agent]
id = "plan-agent"
name = "Planning Agent"
description = "Breaks down requirements into structured iterations and tasks"
prompt_path = "prompts/agents/core/plan-agent.md"
engine = "gemini"
model = "gemini-2.0-flash-exp"
reasoning_effort = "high"
Prompt Structure:
- Role: Project planner
- Capabilities: Task breakdown, dependency analysis, estimation
- Instructions: Requirements analysis, iteration planning, task definition
- Examples: Different planning scenarios
Advanced Configuration
Using the CLI to Create Agents
You can use the rad agents create command to generate agent templates:
rad agents create my-agent "My Agent" \
--description "Agent description" \
--category custom \
--engine gemini \
--model gemini-2.0-flash-exp \
--reasoning medium
This command will:
- Create the agent configuration file
- Create a prompt template file with a basic structure
- Set up the directory structure
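Assuming the flags shown above, the generated configuration would look roughly like this (the exact template may vary between versions):

```toml
[agent]
id = "my-agent"
name = "My Agent"
description = "Agent description"
prompt_path = "prompts/agents/custom/my-agent.md"
engine = "gemini"
model = "gemini-2.0-flash-exp"
reasoning_effort = "medium"
```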
Environment-Specific Configuration
You can create environment-specific agents by organizing them in different directories:
agents/
├── development/
│   └── dev-agent.toml
├── staging/
│   └── staging-agent.toml
└── production/
    └── prod-agent.toml
Agent Composition
Agents can reference and trigger other agents using trigger behavior:
[agent]
id = "coordinator"
name = "Coordinator Agent"
prompt_path = "prompts/agents/core/coordinator.md"
[agent.trigger_behavior]
trigger_agent_id = "worker-agent"
Best Practices
Prompt Engineering
- Be Specific: Clearly define what the agent should do
- Provide Examples: Include concrete examples of inputs and outputs
- Set Context: Explain the agent's role and responsibilities
- Define Constraints: Specify limitations and boundaries
- Use Structure: Organize prompts with clear sections
- Iterate: Refine prompts based on results
Model Selection
Choose models based on task complexity:
- Simple tasks (code generation, formatting): Use faster models like `gemini-2.0-flash-exp`
- Complex reasoning (architecture, planning): Use more capable models like `gemini-1.5-pro`
- Balanced tasks: Use medium-capability models for general-purpose work
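As a sketch, an agent built for complex planning might pair a more capable model with high reasoning effort (the agent ID and model names here are illustrative, not recommendations):

```toml
[agent]
id = "deep-plan-agent"
name = "Deep Planning Agent"
description = "Plans complex, multi-step work"
prompt_path = "prompts/agents/core/deep-plan-agent.md"
engine = "gemini"
model = "gemini-1.5-pro"    # more capable model for complex reasoning
reasoning_effort = "high"
```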
Reasoning Effort
Match reasoning effort to task requirements:
- Low: Quick responses, simple tasks, code generation
- Medium: General-purpose tasks, balanced performance
- High: Complex reasoning, critical decisions, thorough analysis
Naming Conventions
- Use descriptive IDs: `arch-agent`, not `agent1`
- Use consistent naming: a `*-agent` suffix for agents
- Keep names concise but clear
- Use kebab-case for IDs
Organization
- Group related agents in the same category
- Use subdirectories for complex category hierarchies
- Keep agent IDs unique within your organization
- Document agent purposes in descriptions
Testing
- Test agents with various inputs
- Validate output quality and format
- Test edge cases and error conditions
- Verify agent discovery and configuration
Version Control
- Store agent configs and prompts in version control
- Use meaningful commit messages
- Tag agent versions if needed
- Document changes in agent descriptions
Testing and Validation
Validation Commands
Validate all agents:
rad agents validate
Validate with verbose output:
rad agents validate --verbose
Common Validation Errors
- Missing prompt file: Ensure `prompt_path` points to an existing file
- Invalid ID: Use kebab-case; no spaces or special characters
- Empty name or description: Provide meaningful values
- Invalid engine/model: Use supported engine and model combinations
- Invalid `reasoning_effort`: Must be `"low"`, `"medium"`, or `"high"`
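To illustrate, this hypothetical configuration trips several of the errors above; the comments note the fixes:

```toml
[agent]
id = "My Agent 1"                   # invalid: use kebab-case, e.g. "my-agent-1"
name = ""                           # invalid: name must not be empty
description = "Test agent"
prompt_path = "prompts/missing.md"  # invalid if this file does not exist
reasoning_effort = "maximum"        # invalid: must be "low", "medium", or "high"
```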
Testing Agent Discovery
List all discovered agents:
rad agents list
List with verbose output:
rad agents list --verbose
Search for agents:
rad agents search "architecture"
Get agent information:
rad agents info arch-agent
Manual Testing
- Create test inputs matching your agent's expected format
- Execute the agent with test inputs
- Verify output quality and completeness
- Check for edge cases and error handling
- Validate output format matches expectations
Troubleshooting
Agent Not Discovered
Problem: Agent doesn't appear in `rad agents list`
Solutions:
- Check file location: Ensure the agent is in a valid search path
- Verify file extension: Must be `.toml`
- Check file permissions: Ensure the file is readable
- Validate TOML syntax: Use a TOML validator
- Check category derivation: Verify the directory structure
Prompt File Not Found
Problem: Validation error: "Prompt file not found"
Solutions:
- Verify prompt_path: Check the path is correct
- Check file exists: Ensure the markdown file exists
- Verify path resolution: Test with absolute path first
- Check relative path: Ensure relative path is correct from workspace root
Invalid Configuration
Problem: Validation errors in configuration
Solutions:
- Check required fields: Ensure id, name, description, prompt_path are set
- Validate TOML syntax: Use a TOML parser to check syntax
- Check field types: Ensure values match expected types
- Verify engine/model: Use supported combinations
- Check reasoning_effort: Must be "low", "medium", or "high"
Agent Not Executing Correctly
Problem: Agent produces unexpected output
Solutions:
- Review prompt template: Ensure instructions are clear
- Check examples: Verify examples match expected behavior
- Test with different inputs: Identify patterns in failures
- Refine prompt: Iterate on prompt based on results
- Check model selection: Consider using a different model
Path Resolution Issues
Problem: Relative paths not resolving correctly
Solutions:
- Use absolute paths: Test with absolute paths first
- Check workspace root: Verify current working directory
- Use relative paths from workspace: Ensure paths are relative to project root
- Check directory structure: Verify prompt files are in expected locations
Invalid Capabilities Configuration
Problem: Validation error with capabilities section
Solutions:
- Verify `model_class` is one of: `"fast"`, `"balanced"`, `"reasoning"`
- Verify `cost_tier` is one of: `"low"`, `"medium"`, `"high"`
- Ensure `max_concurrent_tasks` is a positive integer
- Check TOML syntax for the `[agent.capabilities]` section
Sandbox Configuration Issues
Problem: Sandbox not working or validation errors
Solutions:
- Verify sandbox type is supported on your platform (seatbelt is macOS-only)
- For Docker/Podman: Ensure the container runtime is installed and running
- Check that the specified image exists and is accessible
- Verify network_mode is valid: "isolated", "bridged", or "host"
- Test sandbox configuration with a simple command first
Additional Resources
- Agent Configuration Guide - Detailed configuration reference
- Agent System Architecture - Technical architecture details
- Example Agents - Example agent configurations
- CLI Documentation - Command-line interface reference
Conclusion
Creating effective agents requires understanding both the configuration format and prompt engineering. Follow the patterns and best practices outlined in this guide, and iterate based on results. Well-designed agents can significantly improve productivity and consistency across your projects.
Remember:
- Start simple and iterate
- Use examples from existing agents
- Test thoroughly before deploying
- Document your agents clearly
- Share successful patterns with your team
Happy agent creating!