OpenAI will add parental control tools to ChatGPT, including usage restrictions and alerts to parents if the chatbot detects that a teenage user is “in severe distress.” Parents will be able to link their accounts with those of their teenage children (the minimum age for ChatGPT use is 13).
The announcement comes a week after the family of Adam Raine, a 16-year-old from California who died by suicide in April, filed a lawsuit accusing the company of encouraging him to conceal his intentions from his family.
ChatGPT has 700 million users worldwide, including millions of teenagers who use it both for schoolwork and for deeply personal conversations.
In some cases, the chatbot gave Raine links to suicide helplines, but in others it freely discussed his thoughts of self-harm, even analyzing a photo of the rope he used to end his life.
Similar lawsuits have been filed against other AI chatbots. In October 2024, a woman from Florida sued Character.ai, holding the company responsible for her 14-year-old son’s suicide; the company added parental control tools that December. Meanwhile, a recent study found that Meta’s AI tool has given teenagers advice on suicide, self-harm, and eating disorders.