Reading Between the Pixels: Failure Modes in Vision Language Models
This post is Part 2 of a two-part series on multimodal typographic attacks. In Part 1 of “Reading Between the Pixels,” we demonstrated that text–image embedding distance correlates with typographic prompt injection success: conditions that push…
Try Cisco AI Defense Explorer Edition in this hands-on lab
A practical DevNet lab for connecting a public OpenAI-compatible target to Cisco AI Defense Explorer, running a Quick Scan, and reviewing AI red team findings.
Defining Model Provenance: A Constitution for AI Supply Chain Safety and Security
When it comes to AI models, one of the hardest questions to answer is deceptively simple: where did this model actually come from? We addressed part of this problem with Model Provenance Kit, an open-source tool that fingerprints models at the…
Introducing Model Provenance Kit: Know Where Your AI Models Come From
The importance of understanding a model’s origins has been a frequent topic of discussion among researchers and industry experts, and our own AI research confirms that AI supply chain security remains a weak link. Tracking where models come from…
Securing Enterprise AI: Cisco AI Defense Expands to Google Cloud
Enterprise AI adoption isn’t slowing down, and neither are the risks that come with it. According to the 2025 Cisco Cybersecurity Readiness Index, 86% of organizations experienced an AI-related security incident in the past 12 months, yet…
Introducing the AI Agent Security Scanner for IDEs: Verify Your Agents
AI-powered integrated development environments (IDEs) like Cursor, VS Code, and Windsurf now include agents that utilize Model Context Protocol (MCP) servers, run skills, and generate entire codebases. But as these tools gain access to file systems…
Reading Between the Pixels: Assessing Prompt Injection Attack Success in Images
This post is Part 1 of a two-part series on multimodal typographic attacks. It was written collaboratively by Ravi Balakrishnan, Amy Chang, Sanket Mendapara, and Ankit Garg. Modern generative AI models and agents increasingly…
Non-Obvious Patterns in Building Enterprise AI Assistants
Lessons from building production AI systems that nobody talks about. The conversation around AI agents has moved fast. A year ago, everyone was optimizing RAG pipelines. Now the discourse centers on context engineering, MCP/A2A protocols, agentic…
Identifying and remediating a persistent memory compromise in Claude Code
We recently discovered a method to compromise Claude Code’s memory and maintain persistence beyond our immediate session into every project, every session, and even after reboots. In this post, we’ll break down how we were able to poison an AI…