Caution
This repository is no longer maintained. It is archived for informational purposes.
Streamline execution of Terraform linters, with actionable AI insights as output.
The Terraform Code Analyzer AI Agent conforms to the AGNTCY specs described at https://github.com/agntcy.
This LangChain agent runs the `terraform validate` and `tflint` linters on a set of Terraform code inputs and interprets the results using OpenAI. By leveraging OpenAI, the agent provides actionable insights and guidance on how to resolve the issues identified by the linters.
It can be used by developers building GenAI agentic applications that would benefit from basic linting of Terraform code.
The Terraform Code Analyzer AI Agent offers value to agentic application developers by saving the effort of writing code to run standard terraform linters, providing an out-of-the-box solution that can be easily inserted into agentic applications via its supported APIs.
This repository contains a Terraform Code Analyzer AI Agent. It performs static analysis on Terraform code to detect security risks, misconfigurations, and anti-patterns.
There are two key analysis steps executed by this agent:
- Runs `terraform validate` and returns the results; if this run fails, the agent stops.
- Runs `tflint` and returns the results. Note that to allow execution of `tflint`, the agent first runs `terraform init`.
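The two steps above can be sketched with plain subprocess calls. This is a minimal illustration, not the agent's actual implementation; it assumes `terraform` and `tflint` are on `PATH`:

```python
import subprocess

def run(cmd: list[str], cwd: str) -> subprocess.CompletedProcess:
    """Run a command and capture its output without raising on failure."""
    return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)

def analyze(tf_dir: str) -> dict:
    """Mimic the agent's two-step pipeline: validate first, then lint."""
    validate = run(["terraform", "validate", "-json"], tf_dir)
    if validate.returncode != 0:
        # Stop early, as the agent does: lint results are not useful
        # when validation already failed.
        return {"validate": validate.stdout or validate.stderr}
    # tflint needs provider/module metadata, so initialize first.
    run(["terraform", "init", "-backend=false"], tf_dir)
    lint = run(["tflint", "--format=json"], tf_dir)
    return {"validate": validate.stdout, "tflint": lint.stdout}
```

In the real agent the raw linter output is then passed to the LLM for interpretation.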
This agent is built with FastAPI and can operate in two modes:
- As a standard API compatible with LangChain's Agent Protocol — an open-source framework for interfacing with AI agents.
- As a client of the Agent Gateway Protocol (AGP) — a gRPC-based protocol enabling secure and scalable communication between AI agents.
- **Dual Interface Support**:
  - LangChain Agent Protocol API: Exposes HTTP endpoints following the Agent Protocol spec for easy integration with LangChain-based ecosystems.
  - AGP Client (Fire-and-Forget Only): Sends non-blocking, one-way messages via AGP — ideal for asynchronous agent workflows without waiting for a response.
- **Security**: When operating via AGP, all communication is protected using authentication, authorization, and end-to-end encryption.
- **JSON-based Logging**: Structured, machine-readable logs to support observability and debugging.
- **CORS Configuration**: Enables secure cross-origin API access from web clients or frontends.
- **Route Tagging**: Tagged routes for better documentation, navigation, and maintainability.
- **Docker Support**:
  - Containerized service for easy deployment
  - Docker Compose for local development
  - Comprehensive integration tests
Before installation, ensure you have:
- Python 3.12+ installed
- Docker and Docker Compose installed
- Make installed (for build automation)
- Terraform → Installation Guide
- TFLint → Installation Guide
```shell
git clone https://github.com/cisco-ai-agents/tf-code-analyzer-agntcy-agent
cd tf-code-analyzer-agntcy-agent
```

The easiest way to get started is to use the development installation, which handles all dependencies and Python path configuration:
```shell
# Install in development mode (this handles all dependencies and PYTHONPATH)
# Creates a virtual environment or uses an existing one
make install-dev

# Activate the virtual environment
source venv/bin/activate  # On Windows: .\venv\Scripts\activate

# Or install in the current environment
make install
```

Before running the application, ensure you have the following environment variables set in your .env file or in your system environment.
If configuring your AI agent to use OpenAI as its LLM provider, set these variables:
```shell
# OpenAI API Configuration
OPENAI_API_KEY=your-openai-api-key-here
OPENAI_MODEL_NAME=gpt-4o  # Specify the model name
OPENAI_TEMPERATURE=0.7    # Adjust temperature for response randomness
```

If configuring your AI agent to use Azure OpenAI as its LLM provider, set these variables:
```shell
# Azure OpenAI API Configuration
AZURE_OPENAI_API_KEY=your-azure-api-key-here
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com
AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment-name       # Deployment name in Azure
AZURE_OPENAI_API_VERSION=your-azure-openai-api-version  # API version
OPENAI_TEMPERATURE=0.7                                  # Adjust temperature for response randomness
```

If using the Agent Gateway Protocol, set the gateway endpoint:

```shell
AGP_GATEWAY_ENDPOINT="http://<your-agp-gateway-host>:<port>"
```

If running the client, set these variables to interact with GitHub:
```shell
# GitHub Repository Configuration
GH_REPO_URL=https://your-github-url  # The repository to analyze
GH_BRANCH=main                       # The branch containing the code to be analyzed

# Optional GitHub Authentication
GH_TOKEN=your-github-token           # (Optional) Provide a token for private repos
```

🔹 Note: If analyzing a public repository, GH_TOKEN is optional.
✅ Now you're ready to run the application!
You can run the application by executing:

```shell
# If using the development installation
python app/main.py

# Or using Make
make run
```

The server uses the workflow server models at `agent_workflow_server/generated/models` to serve ACP endpoints. Note that many of these models import other models from the same package, so moving or renaming any path or file name will break the server. Do not change anything unless you know what you are doing. This hierarchy mimics the official one from the workflow-srv repository.
On a successful run, you should see logs in your terminal similar to the snippet below. The exact timestamps, process IDs, and file paths will vary:
```
{"timestamp": "2025-03-11 13:24:36,754", "level": "INFO", "message": "Logging is initialized. This should appear in the log file.", "module": "logging_config", "function": "configure_logging", "line": 142, "logger": "app", "pid": 5004}
{"timestamp": "2025-03-11 13:24:36,754", "level": "INFO", "message": "Starting FastAPI application...", "module": "main", "function": "main", "line": 155, "logger": "app", "pid": 5004}
{"timestamp": "2025-03-11 13:24:36,758", "level": "INFO", "message": ".env file loaded from <your_cloned_repo_path>/.env", "module": "utils", "function": "load_environment_variables", "line": 64, "logger": "root", "pid": 5004}
INFO:     Started server process [5004]
INFO:     Waiting for application startup.
{"timestamp": "2025-03-11 13:24:36,864", "level": "INFO", "message": "Starting Terraform Code Analyzer Agent", "module": "main", "function": "lifespan", "line": 39, "logger": "root", "pid": 5004}
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8133 (Press CTRL+C to quit)
{"timestamp": "2025-03-21 17:01:31,084", "level": "INFO", "message": "AGP client started for agent: cisco/default/<bound method AgentContainer.get_local_agent of <agp_api.agent.agent_container.AgentContainer object at 0x106dbba10>>", "module": "gateway_container", "function": "start_server", "line": 321, "logger": "agp_api.gateway.gateway_container", "pid": 67267}
```

This output confirms that:
- Logging is properly initialized.
- The server is listening on `0.0.0.0:8133`.
- Your environment variables (note the ".env file loaded" message) are read.
Change to the client folder and run:

```shell
python client/stateless_client
```

On a successful remote graph run, you should see logs in your terminal similar to the snippet below:
```
{"timestamp": "2025-03-11 13:26:29,622", "level": "ERROR", "message": "{'event': 'final_result', 'result': {'github_details': {'repo_url': '<your_repo_url>', 'github_token': '<your_token>', 'branch': '<your_branch>'}, 'static_analyzer_output': '<analyzer_output>'}}", "module": "stateless_client", "function": "<module>", "line": 174, "logger": "__main__", "pid": 7529}
```

To enable agent-to-agent communication via the Agent Gateway Protocol (AGP), you'll first need to run the AGP Gateway locally. A shell script is included to simplify this.
Start the AGP Gateway. From the root of the project, run:

```shell
./client/agp/run_agp_gateway.sh
```

Run the client. Make sure to run the client in a separate terminal from the service:

```shell
python client/agp/agp_client.py
```

Change to the client/acp folder and run:

```shell
python stateless_client.py
```

On a successful remote graph run, you should see logs in your terminal similar to the snippet below:
```
{"asctime": "2025-04-27 23:14:38,238", "levelname": "ERROR", "pathname": "stateless_client.py", "module": "stateless_client", "funcName": "main", "message": "", "exc_info": null, "event": "final_result", "result": "- outputs.tf: Error: Duplicate output definition\n\n  An output named \"web_server_public_ip\" was already defined at main.tf:107,1-30. Output names must be unique within a module."}
```

Project structure:

```
tf-code-analyzer-agent/
├── app/                   # Main application code
│   ├── core/              # Core functionality
│   ├── api/               # API endpoints
│   └── graph/             # Graph processing
├── tests/                 # Test files
│   ├── integration/       # Integration tests
│   └── rest/              # REST API tests
├── client/                # Client applications
├── requirements.txt       # Production dependencies
├── requirements-test.txt  # Test dependencies
├── setup.py               # Package configuration
├── Dockerfile             # Container definition
├── docker-compose.yml     # Local development services
└── Makefile               # Build automation
```
- Running Tests: `make test`
- Starting App and AGP Gateway Services via Docker: `make docker-up`
- Stopping Docker Services: `make docker-down`
- Building the Docker Image: `docker build -t tf-code-analyzer .`
The Dockerfile includes:
- Python 3.12 slim base image
- Git and curl for repository access
- Rust toolchain for dependencies
- Terraform installation
- Application code and dependencies
If you encounter import errors like `ModuleNotFoundError: No module named 'app'`:

1. Ensure you've installed the package in development mode:

   ```shell
   make install-dev
   ```

2. Verify your virtual environment is activated:

   ```shell
   source venv/bin/activate  # On Windows: .\venv\Scripts\activate
   ```

3. Check your PYTHONPATH:

   ```shell
   echo $PYTHONPATH
   ```

   It should include the project root directory.
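As an alternative to setting PYTHONPATH by hand, an entry-point script can add the project root to `sys.path` at startup. This is a workaround sketch, not part of the repository:

```python
import os
import sys

def ensure_on_path(project_root: str) -> str:
    """Prepend project_root to sys.path (if missing) so `import app`
    resolves; returns the absolute path that was ensured."""
    root = os.path.abspath(project_root)
    if root not in sys.path:
        sys.path.insert(0, root)
    return root
```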
- Format: The application is configured to use JSON logging by default. Each log line provides a timestamp, log level, module name, and the message.
- Location: Logs typically go to stdout when running locally. If you configure a file handler or direct logs to a centralized logging solution, they can be written to a file (e.g., `logs/app.log`) or shipped to another service.
- Customization: You can change the log level (`info`, `debug`, etc.) or format by modifying environment variables or the logger configuration in your code. If you run in Docker or Kubernetes, ensure the logs are captured properly and aggregated where needed.
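The structured format can be reproduced with the standard library alone. This sketch is an illustration, not the repository's `logging_config` module, and the `LOG_LEVEL` variable is an assumed name; it emits the same kind of fields shown in the sample logs above:

```python
import json
import logging
import os

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line, similar to the sample output."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
            "logger": record.name,
            "pid": os.getpid(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
# LOG_LEVEL is a hypothetical env var for this sketch.
logger.setLevel(os.getenv("LOG_LEVEL", "INFO").upper())
```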
By default, the API documentation is available at:
`http://0.0.0.0:8133/docs` (Adjust the host and port if you override them via environment variables.)
You need to install Rust: https://www.rust-lang.org/tools/install
Run the server:

```shell
langgraph dev
```

Populate the GitHub input field with:

```json
{
  "repo_url": "https://<your_repo_url>",
  "github_token": "<your_github_token>",
  "branch": "<your_github_branch>"
}
```

Upon successful execution, you should see the analysis results in the Studio UI.
See the open issues for a list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. For detailed contributing guidelines, please see CONTRIBUTING.md
Distributed under the Apache-2.0 License. See LICENSE for more information.
cisco-outshift-ai-agents@cisco.com
Project Link: https://github.com/cisco-ai-agents/tf-code-analyzer-agntcy-agent
- tflint for the linter.
- Langgraph for the agentic platform.
- https://github.com/othneildrew/Best-README-Template, from which this readme was adapted
For more information about our various agents, please visit the agntcy project page.
