Configuration
The Kubernetes Agent (k8s-autopilot) supports flexible configuration via environment variables and Docker Compose. This page details how to configure the agent for both local development and production use.
1. Quick Start (Docker)
The easiest way to configure the agent is passing environment variables to the Docker container.
Essential Variables
| Variable | Description | Example |
|---|---|---|
| `OPENAI_API_KEY` | Required if using OpenAI models. | `sk-...` |
| `ANTHROPIC_API_KEY` | Required if using Anthropic models. | `sk-ant-...` |
| `WORKSPACE_DIR` | Internal path where generated Helm charts are stored. | `/tmp/helm-charts` |
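If you would rather not pass keys inline, Docker's `--env-file` flag loads every `KEY=VALUE` line from a file. A minimal sketch (values are placeholders):

```env
# .env — placeholder values, replace with your own
OPENAI_API_KEY=sk-xxxx
WORKSPACE_DIR=/tmp/helm-charts
```

Then run with `docker run --env-file .env ...` instead of repeating `-e` flags.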
Running with Config
```bash
docker run -d -p 10102:10102 \
  -e OPENAI_API_KEY=your_key_here \
  -v $(pwd)/my-charts:/tmp/helm-charts \
  --name k8s-autopilot \
  sandeep2014/k8s-autopilot:latest
```
Note: Mounting a volume to `/tmp/helm-charts` is critical if you want to access the generated Helm charts on your host machine.
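To confirm the container started cleanly, standard Docker commands are enough:

```bash
# Check the container is running and follow its startup logs
docker ps --filter name=k8s-autopilot
docker logs -f k8s-autopilot
```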
2. LLM Multi-Model Strategy
The agent uses a Split-Brain Architecture to optimize cost and performance. You can route different cognitive tasks to different models.
Configuration Variables
| Variable | Purpose | Recommended Model | Description |
|---|---|---|---|
| `LLM_MODEL` | Standard | `gpt-4o-mini` / `gpt-4o` | Used for general conversation and simple tools. |
| `LLM_HIGHER_MODEL` | Complex | `gpt-4o` / `claude-3-5-sonnet` | Used for complex context gathering and synthesis. |
| `LLM_DEEPAGENT_MODEL` | Reasoning | `o1-mini` / `claude-3-opus` | Used by the Planner and Validator swarms for deep reasoning and code generation. |
| `LLM_PROVIDER` | Default | `openai` | The default provider backend (`openai`, `anthropic`, `google_genai`, `azure_openai`). |
Example: Cost-Optimized Setup
Use a cheaper model for chat, but a powerful reasoning model for critical planning.
# General Chat (Fast & Cheap)
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o-mini
# Deep Reasoning (The "Brain")
LLM_DEEPAGENT_PROVIDER=openai
LLM_DEEPAGENT_MODEL=o1-mini
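The same variable scheme works for other providers. A hypothetical Anthropic-only variant, using the models recommended in the table above:

```env
# Hypothetical Anthropic setup — verify model names against your account's access
ANTHROPIC_API_KEY=sk-ant-...
LLM_PROVIDER=anthropic
LLM_HIGHER_PROVIDER=anthropic
LLM_HIGHER_MODEL=claude-3-5-sonnet
LLM_DEEPAGENT_PROVIDER=anthropic
LLM_DEEPAGENT_MODEL=claude-3-opus
```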
3. Full Stack Configuration (Docker Compose)
To run the full stack (Agent + MCP Servers + Web UI), use the following `docker-compose.yml`:
```yaml
services:
  k8s-autopilot:
    image: sandeep2014/k8s-autopilot:latest
    container_name: k8s-autopilot
    ports:
      - "10102:10102"
    environment:
      # Required: OpenAI API Key (loaded from .env file in the same directory)
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      # Helm MCP Server Configuration
      # Connects to the 'helm-mcp-server' service defined below
      - HELM_MCP_SERVER_HOST=helm-mcp-server
      - HELM_MCP_SERVER_PORT=9000
      - HELM_MCP_SERVER_TRANSPORT=sse
      - HELM_MCP_SERVER_DISABLED=false
      # ArgoCD MCP Server Configuration
      # Connects to the 'argocd-mcp-server' service defined below
      - ARGOCD_MCP_SERVER_HOST=argocd-mcp-server
      - ARGOCD_MCP_SERVER_PORT=8765
      - ARGOCD_MCP_SERVER_TRANSPORT=sse
      - ARGOCD_MCP_SERVER_DISABLED=false
      # LLM Configuration
      - LLM_PROVIDER=openai
      - LLM_MODEL=gpt-4o-mini
      - LLM_HIGHER_PROVIDER=openai
      - LLM_HIGHER_MODEL=gpt-5-mini
      - LLM_DEEPAGENT_PROVIDER=openai
      - LLM_DEEPAGENT_MODEL=o4-mini
      # Optional: Logging level
      - LOG_LEVEL=INFO
    volumes:
      # Persist generated Helm charts to a local 'helm_output' directory
      - ./helm_output:/tmp/helm-charts
    depends_on:
      - helm-mcp-server
    restart: unless-stopped
    networks:
      - k8s-autopilot-net

  helm-mcp-server:
    image: sandeep2014/talkops-mcp:helm-mcp-server-latest
    container_name: helm-mcp-server
    environment:
      - MCP_PORT=9000
      - MCP_ALLOW_WRITE=true
      - MCP_LOG_LEVEL=INFO
      # Explicitly set KUBECONFIG location to match the volume mount
      - KUBECONFIG=/app/.kube/config
    volumes:
      # Mount your local kubeconfig so the server can access your cluster
      # NOTE: Ensure this path points to your valid kubeconfig file
      - ${HOME}/.kube/config:/app/.kube/config:ro
    ports:
      # Expose port 9000 for local debugging/direct access if needed
      - "9000:9000"
    restart: unless-stopped
    networks:
      - k8s-autopilot-net

  argocd-mcp-server:
    image: sandeep2014/talkops-mcp:argocd-mcp-server-latest
    container_name: argocd-mcp-server
    environment:
      # ArgoCD Server Configuration
      # NOTE: Use host.docker.internal to access an ArgoCD server running on the host
      - ARGOCD_SERVER_URL=${ARGOCD_SERVER_URL:-https://host.docker.internal:9090}
      - ARGOCD_AUTH_TOKEN=${ARGOCD_AUTH_TOKEN}
      - ARGOCD_INSECURE=${ARGOCD_INSECURE:-true}
      # SSH Key for Git Repository Access (optional)
      - SSH_PRIVATE_KEY_PATH=/app/.ssh/id_rsa
      # MCP Server Configuration
      - MCP_PORT=8765
      - MCP_ALLOW_WRITE=true
      - MCP_LOG_LEVEL=INFO
    volumes:
      # Mount SSH key for Git repository access (optional, for SSH-based repos)
      # NOTE: Ensure this path points to your SSH private key file
      - ${HOME}/.ssh/id_ed25519:/app/.ssh/id_rsa:ro
    ports:
      # Expose port 8765 for local debugging/direct access if needed
      - "8765:8765"
    restart: unless-stopped
    networks:
      - k8s-autopilot-net

  talkops-ui:
    image: sandeep2014/talkops:latest
    container_name: talkops-ui
    environment:
      - K8S_AGENT_URL=http://localhost:10102
    ports:
      - "8080:80"
    depends_on:
      - k8s-autopilot
    restart: unless-stopped
    networks:
      - k8s-autopilot-net

networks:
  k8s-autopilot-net:
    driver: bridge
```
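With the file above saved as `docker-compose.yml`, create a `.env` next to it (the compose file loads `OPENAI_API_KEY` and the ArgoCD values from there) and bring the stack up:

```env
# .env — placeholder values
OPENAI_API_KEY=sk-xxxx
ARGOCD_AUTH_TOKEN=your-argocd-token
```

```bash
docker compose up -d
docker compose logs -f k8s-autopilot
```

The Web UI is then available at `http://localhost:8080`.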
4. Local Development (Standalone)
If you are developing or debugging the agent locally using `uv`:

1. Clone & Install:

   ```bash
   git clone https://github.com/talkops-ai/k8s-autopilot.git
   cd k8s-autopilot
   uv sync
   ```

2. Environment Setup: Create a `.env` file in the root:

   ```env
   OPENAI_API_KEY=sk-xxxx
   LLM_MODEL=gpt-4o
   LOG_LEVEL=DEBUG
   ```

3. Run Agent:

   ```bash
   uv run --active k8s-autopilot \
     --host localhost \
     --port 10102 \
     --agent-card k8s_autopilot/card/k8s_autopilot.json
   ```
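Once the process is up, a quick request confirms the agent is listening. The exact endpoint depends on the agent framework; the A2A-style agent card path below is an assumption, adjust it if your build serves the card elsewhere:

```bash
# Liveness check — the /.well-known/agent.json path is an assumption
curl -s http://localhost:10102/.well-known/agent.json
```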
5. Helm & Cluster Configuration
⚠️ Important Dependency: The Helm Management Agent (operational capability) relies entirely on the Helm MCP Server to perform cluster actions. The ArgoCD Onboarding Sub-Agent relies on the ArgoCD MCP Server for all ArgoCD project, repository, and application operations.
For specific configuration details regarding:
- Kubeconfig paths
- RBAC Permissions
- Helm Repository setup
- Cluster connectivity
Please refer to the Integrations & MCP Servers documentation or the specific Helm/ArgoCD MCP Server guides.
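Before starting the stack, a few host-side sanity checks catch most kubeconfig and RBAC problems (this sketch assumes `kubectl` and `helm` are installed locally):

```bash
# Verify the kubeconfig the Helm MCP Server will mount actually reaches a cluster
kubectl --kubeconfig "$HOME/.kube/config" cluster-info

# Spot-check RBAC: can the current context create deployments?
kubectl auth can-i create deployments --all-namespaces

# Confirm Helm can see your configured repositories
helm repo list
```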