# Available Tools

Complete reference for the 3 MCP tools provided by the Terraform MCP Server.

## 🛠️ Tool Summary

| Tool | Description |
|---|---|
| `terraform_execute` | Secure execution of Terraform commands with validation |
| `terraform_doc_search` | Semantic similarity search over Terraform documentation |
| `ingest_terraform_docs` | Document ingestion with vector embeddings |
## 1. `terraform_execute`

Executes Terraform commands with enterprise-grade security checks and validation.
### Supported Commands

| Command | Description | Auto-Approve |
|---|---|---|
| `init` | Initialize Terraform configuration | No |
| `plan` | Preview infrastructure changes | No |
| `validate` | Validate configuration syntax | No |
| `apply` | Apply infrastructure changes | Yes |
| `destroy` | Destroy infrastructure | Yes |

Auto-approved commands run non-interactively (no confirmation prompt), so review the `plan` output before invoking `apply` or `destroy`.
### Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `command` | string | Yes | - | Terraform command to execute |
| `working_directory` | string | Yes | - | Directory containing `.tf` files |
| `variables` | dict | No | None | Terraform variables to pass |
| `aws_region` | string | No | None | AWS region for execution |
| `strip_ansi` | boolean | No | True | Remove ANSI color codes |
| `timeout` | integer | No | 300 | Execution timeout in seconds (1-1800) |
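
A minimal invocation needs only the two required parameters. As a sketch (the exact request envelope depends on your MCP client; only the argument names below come from the table above):

```json
{
  "command": "init",
  "working_directory": "/tmp/terraform-project"
}
```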
### Security Features
| Feature | Description |
|---|---|
| Command Whitelisting | Only allowed commands can be executed |
| Directory Traversal Protection | Blocks paths containing .. |
| Pattern Detection | Scans for 100+ dangerous patterns |
| Variable Limits | Maximum 100 variables per execution |
| Timeout Enforcement | Automatic process termination |
| Output Sanitization | 10,000 character limit |
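
For example, a call whose `working_directory` contains `..` is rejected by the directory traversal check before Terraform ever runs. The exact error shape is implementation-specific; a failure response might look roughly like:

```json
{
  "success": false,
  "error": "Security validation failed: path traversal detected in working_directory"
}
```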
### Usage Examples

**Initialize Terraform:**
"Initialize Terraform in /tmp/terraform-project"

**Plan with Variables:**
"Run terraform plan in /tmp/project with environment=production, instance_count=3, and AWS region us-west-2"

**Apply Configuration:**
"Apply the terraform configuration in /tmp/project with instance_type=t3.micro"

**Validate Configuration:**
"Validate the terraform configuration in /tmp/project"
### Response Structure

```json
{
  "success": true,
  "data": {
    "command": "terraform plan",
    "status": "success",
    "result": {
      "return_code": 0,
      "stdout": "Terraform will perform...",
      "stderr": "",
      "execution_time": 2.45
    },
    "metadata": {
      "terraform_version": "1.5.0",
      "aws_region": "us-west-2",
      "variables_count": 2,
      "security_checks_passed": true
    }
  }
}
```
## 2. `terraform_doc_search`

Semantic similarity search over ingested Terraform documentation using vector embeddings.
### Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `query` | string | Yes | - | Search query (1-1000 chars) |
| `top_k` | integer | No | 5 | Number of results (1-50) |
| `similarity_threshold` | float | No | 0.7 | Minimum similarity (0.0-1.0) |
| `node_types` | list | No | None | Document types to search |
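
With the defaults, only `query` is required. A sketch of a call that narrows the search (argument names from the table above; values illustrative):

```json
{
  "query": "S3 bucket versioning and lifecycle configuration",
  "top_k": 10,
  "similarity_threshold": 0.6,
  "node_types": ["resource"]
}
```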
### Node Types

| Type | Description | Index Name |
|---|---|---|
| `resource` | Terraform resource documentation | `docchunk_resource_embedding_hnsw` |
| `data_source` | Terraform data source docs | `docchunk_datasource_embedding_hnsw` |
| `best_practice` | Best practices and guidelines | `docchunk_bestpractice_embedding_hnsw` |
### Search Features
| Feature | Description |
|---|---|
| HNSW Index | Fast similarity search |
| Cosine Similarity | Semantic matching scoring |
| Multi-Type Search | Search across multiple doc types |
| Threshold Filtering | Configurable precision/recall |
| Result Distribution | Even distribution across types |
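
The reported `similarity_score` is the standard cosine similarity between the query embedding $q$ and a chunk embedding $d$ (1536-dimensional vectors here), so scores near 1.0 indicate close semantic matches and the default threshold of 0.7 discards weak ones:

$$
\text{similarity}(q, d) = \frac{q \cdot d}{\lVert q \rVert \, \lVert d \rVert}
$$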
### Usage Examples

**Search Resources:**
"Find AWS S3 bucket configuration examples"

**Search Data Sources:**
"Search for VPC data source configuration"

**Search Best Practices:**
"What are the best practices for Terraform state management?"

**Multi-Type Search:**
"Search for EC2 instance configuration in resources and best practices"
### Response Structure

```json
{
  "success": true,
  "data": {
    "query": "AWS EC2 instance configuration",
    "results_count": 3,
    "results": [
      {
        "content": "resource \"aws_instance\" \"example\" {...}",
        "similarity_score": 0.89,
        "node_type": "resource",
        "id": "docchunk_001"
      }
    ],
    "search_parameters": {
      "top_k": 5,
      "similarity_threshold": 0.7
    },
    "service_info": {
      "provider": "openai",
      "model": "text-embedding-ada-002",
      "dimensions": 1536
    }
  }
}
```
## 3. `ingest_terraform_docs`

Document processing pipeline that ingests Terraform documentation into the knowledge graph, chunking content and generating vector embeddings.
### Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `filter_types` | list | Yes | - | Document types to ingest |
| `filter_services` | list | No | None | AWS services to filter |
| `scan_dirs` | list | No | ["docs/"] | Directories to scan |
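
Only `filter_types` is required. A sketch of a call that ingests both resources and data sources from the default `docs/` directory (the `terraform` filter type covers both, as the next table shows):

```json
{
  "filter_types": ["terraform"]
}
```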
### Filter Types

| Type | Description | Processing |
|---|---|---|
| `resource` | Terraform resource docs | Structured chunking |
| `data_source` | Data source documentation | Structured chunking |
| `terraform` | Both resources and data sources | Structured chunking |
| `best_practice` | Best practices documents | LLM extraction |
| `readme` | README files | Standard chunking |
### Ingestion Features
| Feature | Description |
|---|---|
| Multi-Format Support | HTML, Markdown, PDF |
| Intelligent Discovery | Index-based and directory scanning |
| LLM Extraction | AI-powered content structuring |
| Semantic Chunking | Metadata preservation |
| Incremental Processing | Skip already ingested docs |
| Neo4j Storage | Graph database with vector indexes |
### Usage Examples

**Ingest Resources and Data Sources:**
"Ingest AWS provider resources and data sources"

**Ingest Best Practices:**
"Ingest Terraform best practices documentation"

**Ingest READMEs:**
"Ingest README files from the project"

**Filter by Service:**
"Ingest only EC2 and S3 resources"
### Response Structure

```json
{
  "success": true,
  "data": {
    "ingestion_summary": {
      "total_documents": 150,
      "successful": 148,
      "failed": 2,
      "skipped": 0
    },
    "types_processed": ["resource", "data_source"],
    "chunks_created": 1250,
    "embeddings_generated": 1250
  }
}
```
### Document Processing Pipeline
| Stage | Description |
|---|---|
| Discovery | Parse index files, scan directories |
| Detection | Identify HTML, Markdown, PDF |
| Extraction | Load and extract content |
| Chunking | Create semantic chunks with metadata |
| Embedding | Generate vector embeddings |
| Storage | Store in Neo4j with vector indexes |
## Best Practices

### For `terraform_execute`

- **Use Specific Directories:** Always pass a valid working directory that contains your `.tf` files
- **Set Appropriate Timeouts:** Increase the 300-second default for complex operations
- **Review Variables:** Ensure variable values contain no dangerous patterns
- **Check Return Codes:** Verify `return_code` is 0 before treating a run as successful

### For `terraform_doc_search`

- **Use Specific Queries:** More specific queries yield better results
- **Adjust Thresholds:** Raise `similarity_threshold` for precision, lower it for recall
- **Filter by Type:** Use `node_types` to focus on relevant document types
- **Iterate:** Refine queries based on the results you get back

### For `ingest_terraform_docs`

- **Start Small:** Begin with specific services via `filter_services`
- **Monitor Progress:** Check the ingestion logs
- **Use Incremental:** Leverage existing ingestion state to skip already-ingested docs
- **Verify Quality:** Spot-check ingested content quality (e.g., with `terraform_doc_search`)
## Next Steps

- 📖 **Examples** - Usage patterns and workflows
- ⚙️ **Configuration** - Server configuration