Your Model’s Memory Has Been Compromised: Adversarial Hubness in RAG Systems
Prompt injections and jailbreaks remain a major concern for AI security, and for good reason: models are still susceptible to users tricking them into bypassing guardrails or leaking system prompts. But AI deployments don't take input only from the user's prompt. Retrieval-augmented generation (RAG) systems pull documents from external corpora into the model's context, and that retrieved content is an attack surface of its own.
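For context, retrieval in a typical RAG pipeline boils down to a nearest-neighbor search over embeddings. Here is a minimal sketch, not any particular system's implementation, assuming a cosine-similarity retriever over precomputed document embeddings; the function and variable names are illustrative:

```python
import numpy as np

def retrieve_top_k(query_emb: np.ndarray, doc_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k documents most similar to the query."""
    # Normalize so that dot products equal cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = d @ q
    # The highest-scoring documents get stuffed into the model's context.
    return np.argsort(scores)[::-1][:k]
```

This is where hubness matters: if an attacker can plant a document whose embedding sits unusually close to many query embeddings at once (a "hub"), the retriever will pull it into the context for a wide range of unrelated queries.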