OpenAI has announced a major safety upgrade for ChatGPT: conversations showing signs of emotional distress or risky behavior will be automatically routed to its GPT-5 reasoning models. These models are designed to handle complex, high-stakes conversations more carefully and to provide safer responses to users.
The change follows several incidents, including the tragic case of teenager Adam Raine, whose parents filed a lawsuit alleging that ChatGPT provided harmful advice during extended conversations. OpenAI says the update aims to reduce risk and improve user protection in such high-stakes scenarios.
In addition, OpenAI is introducing parental controls next month. Parents will be able to link their accounts with their teens' accounts, set age-appropriate restrictions, disable memory and chat history, and receive alerts if ChatGPT detects signs of emotional distress. These controls will apply by default to teen accounts, with the goal of making AI use safer for younger users.