Articles
Introducing the AI Agent Security Scanner for IDEs: Verify Your Agents
4 min read
AI-powered integrated developer environments (IDEs) like Cursor, VS Code, and Windsurf now include agents that utilize Model Context Protocol (MCP) servers, run skills, and generate entire codebases. But as these tools gain access to file systems...
Identifying and remediating a persistent memory compromise in Claude Code
4 min read
We recently discovered a method to compromise Claude Code’s memory and maintain persistence beyond our immediate session into every project, every session, and even after reboots. In this post, we’ll break down how we were able to poison an AI.....
Your Model’s Memory Has Been Compromised: Adversarial Hubness in RAG Systems
3 min read
Prompt injections and jailbreaks remain a major concern for AI security, and for good reason: models remain susceptible to users tricking models into doing or saying things like bypassing guardrails or leaking system prompts. But AI deployments don’t just process prompts at inference time (meaning when you are actively querying the model): they may also retrieve, rank, and synthesize external data in real time. Each of those steps is a potential adversarial entry point.