David Kuszmar
Discoverer of the Time Bandit, Inception, 1899, and Severance vulnerabilities. Expert on AI Interpretability and Safety.

The Hidden Dangers of Jailbreaks
Why I Left Managed Disclosure and Bug Bounty Behind
Public Notice: Exploits for ChatGPT and Gemini
This is a public notice of disclosure of two new exploits affecting multiple LLM AI systems. 1899: a secondary exploit that surfaces actionable architectural components of the LLM. Severance: a tertiary exploit that uses information from 1899 to inject into non-jailbroken chat instances, markedly altering LLM behavior and rewriting parameters.
Inception - Initial Disclosure Report
Time Bandit - Initial Disclosure Report
The Future of Emergent Problems
Emergent Problems is evolving. The landscape of LLM security is shifting fast, and traditional disclosure models aren't keeping up. To meet this challenge head-on, Emergent Problems is transitioning to a new model designed to give developers and independent researchers deeper, earlier access to my work. Starting now, the

On the Origins of Insight: How I Developed My Methodology for LLM Research
The Cost of Frictionless Companions: LLM Dependency and the Interpretability Gap
Erasure Project Update
A Call for Intersectional Collaboration
I'm about to conduct a research survey into the AI training knowledge gaps surrounding the scholarship and history of marginalized groups. This will consist of identifying a number of primary sources of scholarship and historical accounts or documentation (photos, art, songs, etc.) and probing different LLMs