Paul Kassianik

AI Safety and Security Researcher

Security Business Group

Paul Kassianik is an AI Safety and Security researcher at Cisco, working on securing large language models. He received his B.A. in Applied Mathematics with a focus in Computer Science from the University of California, Berkeley. Before joining Cisco, Paul was a research engineer at Salesforce, where he researched and developed code-generation large language models and industrial time-series analysis.

Articles

January 31, 2025

SECURITY

Evaluating Security Risk in DeepSeek and Other Frontier Reasoning Models

5 min read

The performance of DeepSeek models has made a clear impact, but are these models safe and secure? We use algorithmic AI vulnerability testing to find out.

December 5, 2023

SECURITY

Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute

4 min read

The automated Tree of Attacks with Pruning (TAP) method can jailbreak advanced language models like GPT-4 and Llama-2 in minutes, inducing them to produce harmful content.