Blaine Nelson

Engineering Architect

Security Business Group

Dr. Blaine Nelson earned his B.S. (University of South Carolina) and his M.S. and Ph.D. (UC Berkeley) degrees in Computer Science. He was a Humboldt Postdoctoral Research Fellow at the University of Tübingen (2011-13) and a Postdoctoral Researcher at the University of Potsdam (2013-14) in Germany. As a graduate student and postdoctoral researcher, Dr. Nelson co-established the foundations of adversarial machine learning. He has twice co-chaired the ACM CCS workshop on Artificial Intelligence and Security, and co-coordinated the Dagstuhl Perspectives Workshop on Machine Learning Methods for Computer Security (2012). Following his postdoctoral work, Dr. Nelson worked as a software engineer in Google's fraud detection group (2014-2016), where he built models and designed infrastructure for large-scale machine learning. He then became a senior software engineer on Google's counter-abuse technology team (2016-2021), where he designed and built a large-scale machine learning workflow system. Currently, Dr. Nelson is an engineering architect at Cisco, where he investigates the security of machine-learned models by finding potential flaws or vulnerabilities in their behavior and developing protections to guard such models against these threats.

Articles

May 28, 2024

SECURITY

Fine-Tuning LLMs Breaks Their Safety and Security Alignment

6 min read

Fine-tuning large language models can compromise their safety and security, making them more vulnerable to jailbreaks and harmful outputs.