
Software teams worldwide now rely on AI coding agents to boost productivity and streamline code creation. But security hasn’t kept up. AI-generated code often lacks basic protections: insecure defaults, missing input validation, hardcoded secrets, outdated cryptographic algorithms, and reliance on end-of-life dependencies are all common. These gaps introduce vulnerabilities that slip into codebases easily and often go unchecked.

The industry needs a unified, open, and model-agnostic approach to secure AI coding. 

Today, Cisco is open-sourcing its framework for securing AI-generated code, internally referred to as Project CodeGuard. 

Project CodeGuard is a security framework that builds secure-by-default rules into AI coding workflows. It offers a community-driven ruleset, translators for popular AI coding agents, and validators to help teams enforce security automatically. Our goal: make secure AI coding the default, without slowing developers down.

CodeGuard Rules

Project CodeGuard is designed to integrate seamlessly across the entire AI coding lifecycle. Before code generation, the rules can inform product design and spec-driven development, and you can apply them in an AI coding agent’s planning phase to steer models toward secure patterns from the start. During code generation, the rules help AI agents avoid introducing security issues as code is written. After code generation, AI agents like Cursor, GitHub Copilot, Codex, Windsurf, and Claude Code can use the rules for code review.

CodeGuard Before and After

These rules can be used before, during, and after code generation: in an AI agent’s planning phase or for initial specification-driven engineering tasks, to prevent vulnerabilities from being introduced while code is generated, and by automated code-review AI agents afterward.

For example, a rule focused on input validation could work at multiple stages: it might suggest secure input handling patterns during code generation, flag potentially unsafe user or AI agent input processing in real time, and then validate that proper sanitization and validation logic is present in the final code. Another rule targeting secret management could prevent hardcoded credentials from being generated, alert developers when sensitive data patterns are detected, and verify that secrets are properly externalized using secure configuration management.
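To illustrate the post-generation stage, here is a minimal sketch of what a secret-management validator could look like. Everything in it, from the pattern list to the function names, is a simplified assumption for illustration, not the actual Project CodeGuard implementation:

```python
# Illustrative only: a minimal post-generation check in the spirit of a
# CodeGuard secret-management rule. The patterns and function names are
# hypothetical, not part of the actual Project CodeGuard release.
import re
import sys

# Naive patterns for common hardcoded-credential shapes (a real rule
# would be far more thorough and framework-aware).
SECRET_PATTERNS = [
    re.compile(r'(?i)(api[_-]?key|secret|password|token)\s*=\s*["\'][^"\']{8,}["\']'),
    re.compile(r'AKIA[0-9A-Z]{16}'),  # AWS access key ID shape
]

def find_hardcoded_secrets(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            for lineno, line in find_hardcoded_secrets(f.read()):
                print(f"{path}:{lineno}: possible hardcoded secret: {line}")
```

A production rule would cover many more credential shapes and plug into the agent’s review loop rather than run as a standalone script, but the shape is the same: the rule that steered generation is also the check applied to the final code.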

This multi-stage methodology ensures that security considerations are woven throughout the development process rather than being an afterthought, creating multiple layers of protection while maintaining the speed and productivity that make AI coding tools so valuable. 

Note: These rules steer AI coding agents toward safer patterns and away from common vulnerabilities by default. They do not guarantee that any given output is secure. Teams should continue to apply standard secure engineering practices, including peer review. Treat Project CodeGuard as a defense-in-depth layer, not a replacement for engineering judgment or compliance obligations.

What we’re releasing in v1.0.0 

We are releasing: 

  • Core security rules based on established security best practices and guidance (e.g., OWASP, CWE)
  • Automated scripts that act as rule translators for common AI coding agents (e.g., Cursor, Windsurf, GitHub Copilot); a sketch of the idea follows this list
  • Documentation to help contributors and adopters get started quickly
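To make the translator concept concrete, here is a minimal sketch of how one agent-agnostic rule could be emitted in agent-specific formats. The rule schema and function names are assumptions for illustration; the output paths follow common conventions for Cursor project rules and GitHub Copilot repository instructions, and the actual translators in the repository may be organized differently:

```python
# Illustrative sketch of a rule translator: takes a generic, agent-agnostic
# rule and writes it in formats that popular AI coding agents pick up.
# The rule schema and output layout are assumptions, not the actual
# Project CodeGuard implementation.
from pathlib import Path

# A generic rule, modeled as plain data (assumed schema).
rule = {
    "id": "secrets-management",
    "title": "Never hardcode credentials",
    "guidance": (
        "Do not generate hardcoded API keys, passwords, or tokens. "
        "Load secrets from environment variables or a secrets manager."
    ),
}

def to_cursor(rule: dict, root: Path) -> None:
    """Write the rule as a Cursor project rule (.cursor/rules/*.mdc)."""
    out = root / ".cursor" / "rules" / f"{rule['id']}.mdc"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(
        f"---\ndescription: {rule['title']}\nalwaysApply: true\n---\n\n"
        f"{rule['guidance']}\n",
        encoding="utf-8",
    )

def to_copilot(rule: dict, root: Path) -> None:
    """Append the rule to GitHub Copilot's repo-wide instructions file."""
    out = root / ".github" / "copilot-instructions.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    with out.open("a", encoding="utf-8") as f:
        f.write(f"## {rule['title']}\n\n{rule['guidance']}\n\n")

if __name__ == "__main__":
    project = Path(".")
    to_cursor(rule, project)
    to_copilot(rule, project)
```

Keeping the canonical rule agent-agnostic is what makes the framework model-agnostic: one ruleset can drive every supported coding agent, with translators absorbing the format differences.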

Roadmap and How to Get Involved 

This is just the beginning. Our roadmap includes expanding rule coverage across programming languages, building automated rule validation, translating rules to additional AI coding platforms as they emerge, and adding intelligent rule suggestions based on project context and technology stack. This automation will also help maintain consistency across different coding agents, reduce manual configuration overhead, and provide actionable feedback loops that continuously improve rule effectiveness based on community usage patterns.

Project CodeGuard thrives on community collaboration. Whether you’re a security engineer, software engineer, or AI researcher, there are multiple ways to contribute:

  • Submit new rules: Help expand coverage for specific languages, frameworks, or vulnerability classes 
  • Build translators: Create integrations for your favorite AI coding tools 
  • Share feedback: Report issues, suggest improvements, or propose new features 

Ready to get started? Visit our GitHub repository and join the conversation. Together, we can make AI-assisted coding secure by default.

Authors

Omar Santos

Distinguished Engineer

Cisco Product Security Incident Response Team (PSIRT) Security Research and Operations