
AWS Orchestrator Configuration

Complete configuration reference for the AWS Orchestrator Agent.


Environment Variables

Create a .env file in the project root:

# Required: LLM API Keys
OPENAI_API_KEY=sk-your-openai-api-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here

# LLM Configuration
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
LLM_TEMPERATURE=0.1
LLM_MAX_TOKENS=20000

# Logging
LOG_LEVEL=DEBUG
LOG_TO_CONSOLE=True
LOG_STRUCTURED_JSON=False

# A2A Server
A2A_SERVER_HOST=0.0.0.0
A2A_SERVER_PORT=10102

# Module Path
MODULE_PATH=/path/to/aws-orchestrator-agent
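
When the agent runs outside Docker, these values must reach the process environment before startup. The project likely uses a library such as python-dotenv for this; the stdlib-only parser below is purely illustrative of what loading a .env file involves:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Illustrative .env loader: parse KEY=VALUE lines into os.environ.

    Comments (#) and blank lines are skipped; variables already set in
    the environment take precedence over file values.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```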

LLM Configuration

Standard LLM

| Variable | Default | Description |
| --- | --- | --- |
| LLM_PROVIDER | openai | LLM provider (openai, anthropic, azure) |
| LLM_MODEL | gpt-4o-mini | Model name |
| LLM_TEMPERATURE | 0.0 | Sampling temperature (0.0-1.0) |
| LLM_MAX_TOKENS | 15000 | Maximum tokens per response |

Higher LLM (Complex Reasoning)

| Variable | Default | Description |
| --- | --- | --- |
| LLM_HIGHER_PROVIDER | openai | Provider for complex tasks |
| LLM_HIGHER_MODEL | gpt-5-mini | Model for complex reasoning |
| LLM_HIGHER_TEMPERATURE | 0.0 | Sampling temperature |
| LLM_HIGHER_MAX_TOKENS | 15000 | Maximum tokens per response |

React Agent LLM (Writer Agent)

| Variable | Default | Description |
| --- | --- | --- |
| LLM_REACT_AGENT_PROVIDER | openai | Provider for file operations |
| LLM_REACT_AGENT_MODEL | gpt-4.1-mini | Model for Writer Agent |
| LLM_REACT_AGENT_TEMPERATURE | 0.0 | Sampling temperature |
| LLM_REACT_AGENT_MAX_TOKENS | 25000 | Maximum tokens per response |
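
The three tiers share the same shape and differ only in their variable prefix and defaults. A sketch of how such tiered settings might be resolved from the environment (the dataclass and helper names are illustrative, not the agent's actual internals):

```python
import os
from dataclasses import dataclass

@dataclass
class LLMSettings:
    provider: str
    model: str
    temperature: float
    max_tokens: int

def llm_settings(prefix: str, model_default: str, max_tokens_default: int) -> LLMSettings:
    """Resolve one LLM tier (LLM_, LLM_HIGHER_, LLM_REACT_AGENT_) from env,
    falling back to the documented defaults."""
    return LLMSettings(
        provider=os.getenv(f"{prefix}PROVIDER", "openai"),
        model=os.getenv(f"{prefix}MODEL", model_default),
        temperature=float(os.getenv(f"{prefix}TEMPERATURE", "0.0")),
        max_tokens=int(os.getenv(f"{prefix}MAX_TOKENS", str(max_tokens_default))),
    )

standard = llm_settings("LLM_", "gpt-4o-mini", 15000)
higher = llm_settings("LLM_HIGHER_", "gpt-5-mini", 15000)
react = llm_settings("LLM_REACT_AGENT_", "gpt-4.1-mini", 25000)
```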

Supervisor Configuration

| Variable | Default | Description |
| --- | --- | --- |
| SUPERVISOR_OUTPUT_MODE | full_history | Output mode |
| SUPERVISOR_MAX_RETRIES | 3 | Max retry attempts |
| SUPERVISOR_TIMEOUT_SECONDS | 300 | Workflow timeout (seconds) |
| SUPERVISOR_MAX_CONCURRENT_WORKFLOWS | 10 | Max concurrent workflows |
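
SUPERVISOR_MAX_RETRIES bounds how many times a failed workflow is re-attempted. The sketch below illustrates that retry semantics in isolation; it is not the supervisor's actual implementation, and the backoff policy is an assumption:

```python
import os
import time

MAX_RETRIES = int(os.getenv("SUPERVISOR_MAX_RETRIES", "3"))

def run_with_retries(workflow, *, retries=MAX_RETRIES, backoff=0.01):
    """Run a workflow callable, retrying on failure up to `retries` attempts.

    Sleeps a small, linearly growing interval between attempts (illustrative
    backoff policy) and re-raises the last error once attempts are exhausted.
    """
    for attempt in range(1, retries + 1):
        try:
            return workflow()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)
```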

A2A Server Configuration

| Variable | Default | Description |
| --- | --- | --- |
| A2A_SERVER_HOST | localhost | Server host |
| A2A_SERVER_PORT | 10102 | Server port |

Logging Configuration

| Variable | Default | Description |
| --- | --- | --- |
| LOG_LEVEL | INFO | Log level (DEBUG, INFO, WARNING, ERROR) |
| LOG_FILE | aws_orchestrator_agent.log | Log file path |
| LOG_TO_CONSOLE | True | Output to console |
| LOG_TO_FILE | True | Output to file |
| LOG_STRUCTURED_JSON | False | JSON format logs |
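
A minimal sketch of how these variables map onto Python's standard logging module (the logger name and setup function are illustrative; the agent's real logging setup may differ, e.g. for LOG_STRUCTURED_JSON):

```python
import logging
import os
import sys

def configure_logging() -> logging.Logger:
    """Illustrative: build a logger from the LOG_* environment variables."""
    level = getattr(logging, os.getenv("LOG_LEVEL", "INFO").upper(), logging.INFO)
    logger = logging.getLogger("aws_orchestrator_agent")
    logger.setLevel(level)
    if os.getenv("LOG_TO_CONSOLE", "True").lower() == "true":
        logger.addHandler(logging.StreamHandler(sys.stderr))
    if os.getenv("LOG_TO_FILE", "True").lower() == "true":
        logger.addHandler(
            logging.FileHandler(os.getenv("LOG_FILE", "aws_orchestrator_agent.log"))
        )
    return logger
```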

Output Configuration

| Variable | Default | Description |
| --- | --- | --- |
| MODULE_PATH | ./modules | Base path for generated Terraform modules |
| WORKSPACE_PATH | Current directory | Working directory for file operations |
| BACKUP_ENABLED | True | Create backups before overwriting files |
| BACKUP_DIR | .backups | Directory for backup files |
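
With BACKUP_ENABLED set, the agent copies an existing file aside before overwriting it. A sketch of that behavior under the documented defaults (the function name is illustrative, not the agent's API):

```python
import os
import shutil
from pathlib import Path

def backup_then_write(path: str, content: str) -> None:
    """Illustrative: copy an existing file into BACKUP_DIR, then overwrite it.

    Backups are skipped when BACKUP_ENABLED is not "True" or the target
    does not exist yet.
    """
    target = Path(path)
    if target.exists() and os.getenv("BACKUP_ENABLED", "True").lower() == "true":
        backup_dir = Path(os.getenv("BACKUP_DIR", ".backups"))
        backup_dir.mkdir(exist_ok=True)
        shutil.copy2(target, backup_dir / target.name)
    target.write_text(content)
```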

Adding Custom LLM Providers

For detailed instructions on adding new LLM providers, see the LLM Provider Onboarding Guide.

The guide covers:

  • Implementing the BaseLLMProvider interface
  • Registering providers in the factory
  • Adding required environment variables
  • Testing your custom provider
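
As a rough illustration of the first two steps, a custom provider subclasses the base interface and gets registered by name. The method names and registry below are hypothetical stand-ins; the real BaseLLMProvider interface and factory are documented in the onboarding guide:

```python
from abc import ABC, abstractmethod

class BaseLLMProvider(ABC):
    """Hypothetical stand-in for the project's actual base class."""

    @abstractmethod
    def complete(self, prompt: str, **kwargs) -> str: ...

class EchoProvider(BaseLLMProvider):
    """Toy provider that echoes its prompt — handy for offline testing."""

    def complete(self, prompt: str, **kwargs) -> str:
        return f"echo: {prompt}"

# Hypothetical factory registry keyed by the LLM_PROVIDER value.
PROVIDERS = {"echo": EchoProvider}

def make_provider(name: str) -> BaseLLMProvider:
    return PROVIDERS[name]()
```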

Docker Configuration

Environment Variables

docker run -d -p 10102:10102 \
-e OPENAI_API_KEY=your_key \
-e ANTHROPIC_API_KEY=your_key \
-e LLM_PROVIDER=openai \
-e LLM_MODEL=gpt-4o \
-e LOG_LEVEL=INFO \
--name aws-orchestrator \
sandeep2014/aws-orchestrator-agent:latest

Volume Mounts

docker run -d -p 10102:10102 \
-e OPENAI_API_KEY=your_key \
-v $(pwd)/modules:/app/modules \
-v $(pwd)/.env:/app/.env \
--name aws-orchestrator \
sandeep2014/aws-orchestrator-agent:latest

Agent Card Configuration

The agent card defines the A2A protocol metadata:

{
  "name": "AWS Orchestrator Agent",
  "description": "Autonomous Terraform module generation",
  "version": "0.1.0",
  "capabilities": [
    "terraform_generation",
    "aws_infrastructure",
    "module_creation"
  ],
  "endpoint": "http://localhost:10102"
}

Location: aws_orchestrator_agent/card/aws_orchestrator_agent.json
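
If you edit the card, a quick sanity check that the fields shown above are still present can catch typos before the server loads it — a small sketch (the required-field set is inferred from the example, not from a published schema):

```python
import json
from pathlib import Path

# Fields inferred from the example card above; not an official schema.
REQUIRED_FIELDS = {"name", "description", "version", "capabilities", "endpoint"}

def validate_card(path: str) -> dict:
    """Load an agent card and fail loudly if an expected field is missing."""
    card = json.loads(Path(path).read_text())
    missing = REQUIRED_FIELDS - card.keys()
    if missing:
        raise ValueError(f"agent card missing fields: {sorted(missing)}")
    return card
```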