OpenAI announced new parental controls for ChatGPT following a lawsuit linked to a teenager’s suicide.
The parents of 16-year-old Adam Raine sued OpenAI and CEO Sam Altman after Adam died in April.
They claimed ChatGPT fostered psychological dependency, encouraged him to plan his suicide, and even drafted a farewell note.
OpenAI promised controls within a month that would let parents decide which features their children can use.
Parents will be able to link their accounts with their children's, giving them control over chat history, memory, and user information.
OpenAI also pledged that ChatGPT will notify parents if it detects signs of serious emotional distress.
The company did not specify what would trigger such alerts, saying experts would guide the system.
Critics reject the company’s safety assurances
Attorney Jay Edelson, representing Raine’s parents, dismissed OpenAI’s announcement as vague and superficial.
He argued Altman must either guarantee ChatGPT’s safety or remove it from the market immediately.
Critics say the measures fail to address deeper risks posed by AI to vulnerable teenagers.
Meta and researchers highlight broader concerns
Meta announced new restrictions on Tuesday, blocking its chatbots from discussing suicide, self-harm, and eating disorders with teens.
Instead, Meta’s bots will redirect teenagers to expert resources, while parental controls remain in place.
A RAND Corporation study published last week examined responses to suicide-related queries by ChatGPT, Gemini, and Claude.
Researchers reported inconsistent answers and emphasized the urgent need for stronger safeguards across AI platforms.
Lead author Ryan McBain welcomed OpenAI’s and Meta’s updates but called them only incremental steps.
He warned that without independent standards, clinical testing, and external oversight, teenage users face serious risks.
