
# AWS Orchestrator Agent

AWS Orchestrator Agent is a sophisticated, autonomous multi-agent system that generates enterprise-level AWS Terraform modules through intelligent research, deep analysis, and AI-powered code generation. Built with LangGraph and the Google A2A protocol, it delivers production-ready infrastructure automation through coordinated specialist agents.


## Status

| Aspect | Details |
| --- | --- |
| Status | ✅ Ready |
| Version | v0.1.0 |
| Maintained By | TalkOps Team |
| Protocol | Google A2A Protocol |
| Framework | LangGraph Multi-Agent |
| GitHub | talkops-ai/aws-orchestrator-agent |

## Key Features

| Feature | Description |
| --- | --- |
| Autonomous Terraform Generation | Generates complete AWS Terraform modules autonomously |
| Multi-Agent Architecture | 7+ specialized agents with focused responsibilities |
| A2A Protocol | First-class Google A2A protocol integration |
| Multi-LLM Support | OpenAI, Anthropic, Azure, and more providers |
| Enterprise-Grade | Production-ready security and compliance |
| Deep Research | Thorough 20-25 minute analysis per module |

## Architecture

### Agent Components

| Agent | Role |
| --- | --- |
| Main Supervisor | Orchestrates the entire workflow lifecycle |
| Planner Sub-Supervisor | Deep research and execution planning |
| Generator Swarm | 7 specialized agents for code generation |
| Writer React Agent | File system operations |
| Validation Agent | Quality assurance (coming soon) |

### Generator Swarm Agents

| Agent | Responsibility |
| --- | --- |
| Resource Configuration | Generates Terraform resource blocks |
| Variable Definition | Creates variable definitions with validation |
| Data Source | Generates data source blocks |
| Local Values | Creates local value blocks |
| Output Definition | Generates output definitions |
| Backend Generator | Creates backend configuration |
| README Generator | Generates documentation |
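Each swarm agent maps roughly onto one file of a conventional Terraform module layout. A hypothetical output structure illustrating that mapping (file names are illustrative, not a guarantee of exactly what the agent emits):

```text
my-module/
├── main.tf        # Resource Configuration agent
├── variables.tf   # Variable Definition agent
├── data.tf        # Data Source agent
├── locals.tf      # Local Values agent
├── outputs.tf     # Output Definition agent
├── backend.tf     # Backend Generator agent
└── README.md      # README Generator agent
```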

## A2A Protocol Integration

The AWS Orchestrator Agent is designed as a first-class A2A (Agent-to-Agent) protocol agent, enabling seamless integration with enterprise agent ecosystems and multi-agent coordination.

### Key A2A Components

| Component | Description |
| --- | --- |
| A2A Executor Integration | `AWSOrchestratorAgentExecutor` implements the A2A protocol |
| Supervisor Agent Adapter | Bridges the LangGraph-based Supervisor with the A2A protocol |
| A2A Server Integration | Deployable as an A2A agent server |
| State Management | Enterprise-grade workflow state tracking across agent boundaries |

## Workflow

> ⏱️ **Processing Time:** 20-25 minutes per enterprise-grade module


## Use Cases

| Use Case | Description |
| --- | --- |
| 🏗️ Enterprise IaC | Generate complete Terraform modules for AWS services with best practices |
| 🔄 DevOps Automation | Integrate with CI/CD pipelines for automated infrastructure testing and validation |
| 📚 Knowledge Management | Create comprehensive documentation and maintain infrastructure knowledge bases |
| 🛡️ Security & Compliance | Enforce security best practices and validate compliance requirements |

## Key Benefits

| Benefit | Details |
| --- | --- |
| Autonomous Orchestration | Intelligent task delegation with real-time progress tracking |
| A2A Protocol Integration | Seamless integration with other A2A-compliant agents |
| Modular Architecture | 7+ specialized agents with state isolation and easy extensibility |
| Advanced Coordination | Dependency-aware handoffs with priority-based routing |
| Production-Ready | Comprehensive error handling, structured logging, and HCL validation |
| Complete Modules | Full Terraform module structure with auto-generated documentation |

## Prerequisites

| Requirement | Details |
| --- | --- |
| Python | 3.12+ |
| Terraform CLI | Required for validation |
| AWS CLI | For deployment (optional) |
| LLM API Key | OpenAI, Anthropic, or Azure |
| Time Allocation | 20-25 minutes per enterprise module |

## Quick Start

```shell
docker pull sandeep2014/aws-orchestrator-agent:latest

docker run -d -p 10102:10102 \
  -e OPENAI_API_KEY=your_key \
  --name aws-orchestrator \
  sandeep2014/aws-orchestrator-agent:latest
```

### Standalone Installation

```shell
# Clone the repository
git clone https://github.com/talkops-ai/aws-orchestrator-agent.git
cd aws-orchestrator-agent

# Install with uv
uv venv --python=3.12
source .venv/bin/activate
uv pip install -e .

# Create .env file
echo "OPENAI_API_KEY=your_key" > .env
```

### Run the A2A Server

```shell
aws-orchestrator-agent \
  --host 0.0.0.0 \
  --port 10102 \
  --agent-card aws_orchestrator_agent/card/aws_orchestrator_agent.json
```
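Once the server is up, A2A clients talk to it via JSON-RPC over HTTP. A minimal sketch of building such a request in Python (the `message/send` method and field names follow the public A2A specification, not this agent's source; verify them against the agent card your deployment serves before relying on them):

```python
import json
import uuid

# Sketch of an A2A JSON-RPC request asking the orchestrator for a module.
# Method and field names are taken from the public A2A spec and are an
# assumption here -- check the deployed agent card for the exact contract.
payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [
                {"kind": "text", "text": "Generate a Terraform module for an S3 bucket"}
            ],
            "messageId": str(uuid.uuid4()),
        }
    },
}

# POST this body to http://localhost:10102/ with Content-Type: application/json.
print(json.dumps(payload, indent=2))
```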

## LLM Provider Support

| Provider | Status |
| --- | --- |
| OpenAI | ✅ Supported (GPT-4o, GPT-4o-mini) |
| Anthropic | ✅ Supported (Claude models) |
| Azure OpenAI | ✅ Supported |
| Additional Providers | Extensible architecture |

Configure via environment variables:

```shell
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
LLM_TEMPERATURE=0.0
LLM_MAX_TOKENS=15000
```
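The same variables should apply when switching providers; for example, a hypothetical Anthropic configuration (the model name and the `ANTHROPIC_API_KEY` variable name are assumptions here, so confirm both against the onboarding guide below):

```shell
LLM_PROVIDER=anthropic
LLM_MODEL=claude-3-5-sonnet-latest
LLM_TEMPERATURE=0.0
LLM_MAX_TOKENS=15000
ANTHROPIC_API_KEY=your_key
```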

📖 Need to enable other LLM providers? Follow the LLM Provider Onboarding Guide for detailed configuration instructions.


## Resources

| Resource | URL |
| --- | --- |
| Docker Hub | sandeep2014/aws-orchestrator-agent |
| Discord | Join Community |
| License | Apache 2.0 |