Articles
Announcing a New Framework for Securing AI-Generated Code
3 min read
Software teams worldwide now rely on AI coding agents to boost productivity and streamline code creation. But security hasn’t kept up. AI-generated code often lacks basic protections: insecure defaults, missing input validation, hardcoded secrets, outdated cryptographic algorithms, and reliance on end-of-life dependencies are common. These gaps create vulnerabilities that can easily be introduced and often […]
The Need for a Strong CVE Program
2 min read
The CVE program is the foundation for standardized vulnerability disclosure and management. With its future uncertain, global organizations face challenges.
Advancing AI Security and Contributing to CISA’s JCDC AI Efforts
1 min read
Discover how CISA's new AI Security Incident Collaboration Playbook strengthens AI security and resilience.
Introducing Cisco’s AI Security Best Practice Portal
2 min read
Cisco's AI Security Portal contains resources to help you secure your AI implementation, whether you're a seasoned professional or new to the field.
Introducing the Coalition for Secure AI (CoSAI)
2 min read
Announcing the launch of the Coalition for Secure AI (CoSAI) to help securely build, deploy, and operate AI systems to mitigate AI-specific security risks.
Enhancing AI Security Incident Response Through Collaborative Exercises
2 min read
Takeaways from a tabletop exercise led by CISA's Joint Cyber Defense Collaborative (JCDC), which brought together government and industry leaders to enhance our collective ability to respond to AI-related security incidents.
Introducing the Open Supply-Chain Information Modeling (OSIM) Technical Committee
4 min read
OSIM marks a significant advance toward a more secure and resilient supply-chain ecosystem.
Securing the LLM Stack
7 min read
Learn how to secure the LLM stack, which is essential to protecting data, preserving user trust, and ensuring the operational integrity, reliability, and ethical use of these powerful AI models.
Securing AI: Navigating the Complex Landscape of Models, Fine-Tuning, and RAG
7 min read
Bad actors are leveraging AI, escalating the complexity and scale of threats. Robust security measures and proper monitoring are needed when developing, fine-tuning, and deploying AI models.