AWS Orchestrator Agent
AWS Orchestrator Agent is a sophisticated, autonomous multi-agent system that generates enterprise-level AWS Terraform modules through intelligent research, deep analysis, and AI-powered code generation. Built with LangGraph and the Google A2A protocol, it delivers production-ready infrastructure automation through coordinated specialist agents.
Status
| Aspect | Details |
|---|---|
| Status | ✅ Ready |
| Version | v0.1.0 |
| Maintained By | TalkOps Team |
| Protocol | Google A2A Protocol |
| Framework | LangGraph Multi-Agent |
| GitHub | talkops-ai/aws-orchestrator-agent |
Key Features
| Feature | Description |
|---|---|
| Autonomous Terraform Generation | Generates complete AWS Terraform modules autonomously |
| Multi-Agent Architecture | 7+ specialized agents with focused responsibilities |
| A2A Protocol | First-class Google A2A protocol integration |
| Multi-LLM Support | OpenAI, Anthropic, Azure, and more providers |
| Enterprise-Grade | Production-ready security and compliance |
| Deep Research | 20-25 minute thorough analysis per module |
Architecture
Agent Components
| Agent | Role |
|---|---|
| Main Supervisor | Orchestrates entire workflow lifecycle |
| Planner Sub-Supervisor | Deep research and execution planning |
| Generator Swarm | 7 specialized agents for code generation |
| Writer React Agent | File system operations |
| Validation Agent | Quality assurance (coming soon) |
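The agent hierarchy above can be pictured as a LangGraph state graph in which the Main Supervisor routes each step to a specialist node. The sketch below is illustrative only; node names, state shape, and routing logic are hypothetical placeholders, not the project's actual implementation.

```python
# Minimal sketch, assuming LangGraph's StateGraph API; all node names and
# routing logic here are hypothetical, not the project's real code.
from langgraph.graph import StateGraph, MessagesState, START, END


def planner(state: MessagesState):
    """Planner Sub-Supervisor: deep research and execution planning."""
    ...  # call an LLM, attach a generation plan to the state


def generator_swarm(state: MessagesState):
    """Generator Swarm: fan out to the seven generator agents, collect HCL."""
    ...


def writer(state: MessagesState):
    """Writer React Agent: persist the generated module to the file system."""
    ...


def supervisor(state: MessagesState) -> str:
    """Main Supervisor: pick the next specialist (or END) for the current step."""
    ...


builder = StateGraph(MessagesState)
builder.add_node("planner", planner)
builder.add_node("generator_swarm", generator_swarm)
builder.add_node("writer", writer)
builder.add_edge(START, "planner")
builder.add_conditional_edges("planner", supervisor, ["generator_swarm", END])
builder.add_conditional_edges("generator_swarm", supervisor, ["writer", "planner", END])
builder.add_edge("writer", END)
graph = builder.compile()
```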
Generator Swarm Agents
| Agent | Responsibility |
|---|---|
| Resource Configuration | Generates Terraform resource blocks |
| Variable Definition | Creates variable definitions with validation |
| Data Source | Generates data source blocks |
| Local Values | Creates local value blocks |
| Output Definition | Generates output definitions |
| Backend Generator | Creates backend configuration |
| README Generator | Generates documentation |
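Together, these agents cover the files of a conventional Terraform module layout. The mapping below is an assumption based on the responsibilities listed above, using standard file names rather than output confirmed from the project.

```python
# Hypothetical agent-to-file mapping; file names follow common Terraform
# module conventions and are not confirmed against the project's output.
GENERATOR_OUTPUTS = {
    "resource_configuration": "main.tf",      # Terraform resource blocks
    "variable_definition": "variables.tf",    # input variables with validation
    "data_source": "data.tf",                 # data source blocks
    "local_values": "locals.tf",              # local value blocks
    "output_definition": "outputs.tf",        # output definitions
    "backend_generator": "backend.tf",        # backend configuration
    "readme_generator": "README.md",          # generated documentation
}
```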
A2A Protocol Integration
The AWS Orchestrator Agent is designed as a first-class A2A (Agent-to-Agent) protocol agent, enabling seamless integration into enterprise agent ecosystems and coordination with other A2A-compliant agents.
Key A2A Components
| Component | Description |
|---|---|
| A2A Executor Integration | AWSOrchestratorAgentExecutor implements the A2A protocol |
| Supervisor Agent Adapter | Bridges LangGraph-based Supervisor with A2A protocol |
| A2A Server Integration | Deployable as an A2A agent server |
| State Management | Enterprise-grade workflow state tracking across agent boundaries |
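Because the agent is exposed as an A2A server, other A2A-compliant agents can discover it through its agent card. A minimal discovery sketch, assuming the server from the Quick Start below is reachable on localhost:10102 and serves its card at the A2A well-known path:

```python
# Discovery sketch using plain httpx; this is not the project's own client.
# Host, port, and the well-known agent-card path follow the A2A convention.
import httpx

AGENT_URL = "http://localhost:10102"

card = httpx.get(f"{AGENT_URL}/.well-known/agent.json").json()
print(card["name"])                                          # advertised agent name
print([skill["name"] for skill in card.get("skills", [])])   # advertised skills
```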
Workflow
⏱️ Processing Time: 20-25 minutes for enterprise-grade modules
Use Cases
| Use Case | Description |
|---|---|
| 🏗️ Enterprise IaC | Generate complete Terraform modules for AWS services with best practices |
| 🔄 DevOps Automation | Integrate with CI/CD pipelines for automated infrastructure testing and validation |
| 📚 Knowledge Management | Create comprehensive documentation and maintain infrastructure knowledge bases |
| 🛡️ Security & Compliance | Ensure security best practices and validate compliance requirements |
Key Benefits
| Benefit | Details |
|---|---|
| Autonomous Orchestration | Intelligent task delegation with real-time progress tracking |
| A2A Protocol Integration | Seamless integration with other A2A-compliant agents |
| Modular Architecture | 7+ specialized agents with state isolation and easy extensibility |
| Advanced Coordination | Dependency-aware handoffs with priority-based routing |
| Production-Ready | Comprehensive error handling, structured logging, and HCL validation |
| Complete Modules | Full Terraform structure with auto-generated documentation |
Prerequisites
| Requirement | Details |
|---|---|
| Python | 3.12+ |
| Terraform CLI | Required for validation |
| AWS CLI | For deployment (optional) |
| LLM API Key | OpenAI, Anthropic, or Azure |
| Time Allocation | 20-25 minutes per enterprise module |
Quick Start
Docker (Recommended)
docker pull sandeep2014/aws-orchestrator-agent:latest
docker run -d -p 10102:10102 \
-e OPENAI_API_KEY=your_key \
--name aws-orchestrator \
sandeep2014/aws-orchestrator-agent:latest
Standalone Installation
# Clone the repository
git clone https://github.com/talkops-ai/aws-orchestrator-agent.git
cd aws-orchestrator-agent
# Install with uv
uv venv --python=3.12
source .venv/bin/activate
uv pip install -e .
# Create .env file
echo "OPENAI_API_KEY=your_key" > .env
Run the A2A Server
aws-orchestrator-agent \
--host 0.0.0.0 \
--port 10102 \
--agent-card aws_orchestrator_agent/card/aws_orchestrator_agent.json
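Once the server is up, a generation request can be submitted over the A2A JSON-RPC endpoint. The sketch below is illustrative only: the method name and payload shape follow the public A2A specification and may need adjusting to the spec version this server implements, and the prompt text is just an example.

```python
# Illustrative A2A request, not the project's documented API. Assumes the
# server accepts the spec's JSON-RPC "message/send" method at its root URL.
import uuid
import httpx

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "messageId": str(uuid.uuid4()),
            "parts": [{
                "kind": "text",
                "text": "Generate a Terraform module for an S3 bucket "
                        "with versioning and SSE-KMS encryption.",
            }],
        }
    },
}

resp = httpx.post("http://localhost:10102/", json=payload, timeout=60.0)
print(resp.json())  # returns a task envelope; full generation takes 20-25 minutes
```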
LLM Provider Support
| Provider | Status |
|---|---|
| OpenAI | ✅ Supported (GPT-4o, GPT-4o-mini) |
| Anthropic | ✅ Supported (Claude models) |
| Azure OpenAI | ✅ Supported |
| Additional Providers | Extensible architecture |
Configure via environment variables:
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
LLM_TEMPERATURE=0.0
LLM_MAX_TOKENS=15000
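As an example, switching to Anthropic might look like the following; the model identifier and API-key variable name are illustrative, so check the onboarding guide below for the exact settings:
LLM_PROVIDER=anthropic
LLM_MODEL=claude-3-5-sonnet-latest
ANTHROPIC_API_KEY=your_key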
📖 Need to enable other LLM providers? Follow the LLM Provider Onboarding Guide for detailed configuration instructions.
Links
| Resource | Link |
|---|---|
| Docker Hub | sandeep2014/aws-orchestrator-agent |
| Discord | Join Community |
| License | Apache 2.0 |