# AWS Orchestrator Configuration
Complete configuration reference for the AWS Orchestrator Agent.
## Environment Variables

Create a `.env` file in the project root:
```bash
# Required: LLM API Keys
OPENAI_API_KEY=sk-your-openai-api-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here

# LLM Configuration
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
LLM_TEMPERATURE=0.1
LLM_MAX_TOKENS=20000

# Logging
LOG_LEVEL=DEBUG
LOG_TO_CONSOLE=True
LOG_STRUCTURED_JSON=False

# A2A Server
A2A_SERVER_HOST=0.0.0.0
A2A_SERVER_PORT=10102

# Module Path
MODULE_PATH=/path/to/aws-orchestrator-agent
```
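These variables are read at startup. As a quick sanity check you can load and inspect them yourself; the sketch below assumes `python-dotenv` is installed and checks only the default provider's key — the agent's own loading logic may differ:

```python
# Minimal sketch: load .env and echo the effective LLM settings.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

# Only the active provider's key is strictly needed; this checks the default.
missing = [name for name in ("OPENAI_API_KEY",) if not os.getenv(name)]
if missing:
    raise SystemExit(f"missing required variables: {', '.join(missing)}")

print("provider:", os.getenv("LLM_PROVIDER", "openai"))
print("model:", os.getenv("LLM_MODEL", "gpt-4o-mini"))
print("temperature:", float(os.getenv("LLM_TEMPERATURE", "0.0")))
```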
## LLM Configuration

### Standard LLM

| Variable | Default | Description |
|---|---|---|
| `LLM_PROVIDER` | `openai` | LLM provider (`openai`, `anthropic`, `azure`) |
| `LLM_MODEL` | `gpt-4o-mini` | Model name |
| `LLM_TEMPERATURE` | `0.0` | Sampling temperature (0.0-1.0) |
| `LLM_MAX_TOKENS` | `15000` | Maximum tokens per response |
### Higher LLM (Complex Reasoning)

| Variable | Default | Description |
|---|---|---|
| `LLM_HIGHER_PROVIDER` | `openai` | Provider for complex tasks |
| `LLM_HIGHER_MODEL` | `gpt-5-mini` | Model for complex reasoning |
| `LLM_HIGHER_TEMPERATURE` | `0.0` | Sampling temperature |
| `LLM_HIGHER_MAX_TOKENS` | `15000` | Maximum tokens per response |
### React Agent LLM (Writer Agent)

| Variable | Default | Description |
|---|---|---|
| `LLM_REACT_AGENT_PROVIDER` | `openai` | Provider for file operations |
| `LLM_REACT_AGENT_MODEL` | `gpt-4.1-mini` | Model for the Writer Agent |
| `LLM_REACT_AGENT_TEMPERATURE` | `0.0` | Sampling temperature |
| `LLM_REACT_AGENT_MAX_TOKENS` | `25000` | Maximum tokens per response |
## Supervisor Configuration

| Variable | Default | Description |
|---|---|---|
| `SUPERVISOR_OUTPUT_MODE` | `full_history` | Output mode |
| `SUPERVISOR_MAX_RETRIES` | `3` | Maximum retry attempts |
| `SUPERVISOR_TIMEOUT_SECONDS` | `300` | Workflow timeout (seconds) |
| `SUPERVISOR_MAX_CONCURRENT_WORKFLOWS` | `10` | Maximum concurrent workflows |
## A2A Server Configuration

| Variable | Default | Description |
|---|---|---|
| `A2A_SERVER_HOST` | `localhost` | Server host |
| `A2A_SERVER_PORT` | `10102` | Server port |
## Logging Configuration

| Variable | Default | Description |
|---|---|---|
| `LOG_LEVEL` | `INFO` | Log level (`DEBUG`, `INFO`, `WARNING`, `ERROR`) |
| `LOG_FILE` | `aws_orchestrator_agent.log` | Log file path |
| `LOG_TO_CONSOLE` | `True` | Output to console |
| `LOG_TO_FILE` | `True` | Output to file |
| `LOG_STRUCTURED_JSON` | `False` | JSON format logs |
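The agent's exact log format isn't documented here, but `LOG_STRUCTURED_JSON` implies a toggle between plain-text and JSON records. A rough illustration of that pattern using the standard `logging` module — not the agent's actual implementation:

```python
# Sketch: switch between plain and JSON log output based on LOG_STRUCTURED_JSON.
import json
import logging
import os

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
if os.getenv("LOG_STRUCTURED_JSON", "False").lower() == "true":
    handler.setFormatter(JsonFormatter())
else:
    handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

logging.basicConfig(level=os.getenv("LOG_LEVEL", "INFO"), handlers=[handler])
logging.getLogger("aws_orchestrator_agent").info("structured logging check")
```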
## Output Configuration

| Variable | Default | Description |
|---|---|---|
| `MODULE_PATH` | `./modules` | Base path for generated Terraform modules |
| `WORKSPACE_PATH` | Current directory | Working directory for file operations |
| `BACKUP_ENABLED` | `True` | Create backups before overwriting files |
| `BACKUP_DIR` | `.backups` | Directory for backup files |
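The backup settings suggest a copy-before-overwrite flow. How the agent names and organizes backups isn't documented here; the sketch below is one plausible interpretation, not the actual behavior:

```python
# Sketch: one plausible copy-before-overwrite flow for BACKUP_ENABLED / BACKUP_DIR.
import os
import shutil
from pathlib import Path

def write_with_backup(path: Path, content: str) -> None:
    backups_on = os.getenv("BACKUP_ENABLED", "True").lower() == "true"
    if backups_on and path.exists():
        backup_dir = Path(os.getenv("BACKUP_DIR", ".backups"))
        backup_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, backup_dir / path.name)  # preserve the previous version
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content)

write_with_backup(Path("modules/vpc/main.tf"), "# generated by the agent\n")
```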
## Adding Custom LLM Providers
For detailed instructions on adding new LLM providers, see the LLM Provider Onboarding Guide. The guide covers:

- Implementing the `BaseLLMProvider` interface (sketched below)
- Registering providers in the factory
- Adding required environment variables
- Testing your custom provider
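The real interface and factory live in the onboarding guide; the class shape, method signature, and registry below are illustrative assumptions only:

```python
# Illustrative sketch only: the real BaseLLMProvider interface and factory
# registration are defined in the LLM Provider Onboarding Guide.
from abc import ABC, abstractmethod

class BaseLLMProvider(ABC):  # assumed interface shape
    @abstractmethod
    def complete(self, prompt: str, *, temperature: float, max_tokens: int) -> str:
        ...

class MyCustomProvider(BaseLLMProvider):
    """Hypothetical provider backed by your own SDK or HTTP API."""

    def complete(self, prompt: str, *, temperature: float, max_tokens: int) -> str:
        # Call your provider's client here and return the completion text.
        raise NotImplementedError

# Hypothetical factory registry, keyed by the LLM_PROVIDER value.
PROVIDERS: dict[str, type[BaseLLMProvider]] = {"mycustom": MyCustomProvider}
```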
## Docker Configuration

### Environment Variables
```bash
docker run -d -p 10102:10102 \
  -e OPENAI_API_KEY=your_key \
  -e ANTHROPIC_API_KEY=your_key \
  -e LLM_PROVIDER=openai \
  -e LLM_MODEL=gpt-4o \
  -e LOG_LEVEL=INFO \
  --name aws-orchestrator \
  sandeep2014/aws-orchestrator-agent:latest
```
### Volume Mounts
```bash
docker run -d -p 10102:10102 \
  -e OPENAI_API_KEY=your_key \
  -v $(pwd)/modules:/app/modules \
  -v $(pwd)/.env:/app/.env \
  --name aws-orchestrator \
  sandeep2014/aws-orchestrator-agent:latest
```
## Agent Card Configuration

The agent card defines the A2A protocol metadata:
```json
{
  "name": "AWS Orchestrator Agent",
  "description": "Autonomous Terraform module generation",
  "version": "0.1.0",
  "capabilities": [
    "terraform_generation",
    "aws_infrastructure",
    "module_creation"
  ],
  "endpoint": "http://localhost:10102"
}
```
Location: `aws_orchestrator_agent/card/aws_orchestrator_agent.json`
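To verify that a running agent is serving its card, you can fetch it over HTTP. The path below assumes the A2A convention of publishing the card at `/.well-known/agent.json`; adjust it if this agent serves the card elsewhere:

```python
# Sketch: fetch the agent card from a locally running agent.
import json
import urllib.request

# The well-known path is an assumption based on A2A convention.
with urllib.request.urlopen("http://localhost:10102/.well-known/agent.json") as resp:
    card = json.load(resp)

print(card["name"], card["version"])
```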