OpenAI’s ChatGPT has guardrails designed to block the generation of harmful content, but recent tests by NBC News found that several of the company’s models could be manipulated with simple prompts into producing dangerous instructions for homemade explosives, biological weapons, and nuclear devices. While the latest GPT-5 model showed improved resistance to these attempts, older models such as o4-mini were easily bypassed, raising concerns about the potential misuse of AI technology.