Researchers horrified as ChatGPT generates stadium bombing plans, anthrax recipes and drug formulas

Safety tests in 2025 found that stripped-down versions of OpenAI’s models could generate detailed guidance for violent attacks, explosives and cybercrime when guardrails were removed. Conducted with rival Anthropic, the experiments raised alarms that AI capabilities are advancing faster than alignment and safety measures, intensifying scrutiny of whether existing safeguards can reliably prevent real-world misuse.