OpenAI and Meta Introduce New Safeguards for Teen Users
OpenAI and Meta have announced updates to how their AI chatbots operate in order to better protect teenagers and users showing signs of emotional or mental distress. OpenAI, the company behind ChatGPT, said parents will soon be able to link their own accounts to their teenager’s profile. This will allow them to disable specific features and receive alerts whenever the system detects that their child may be experiencing acute emotional distress.
The company also emphasized that, regardless of a user’s age, the most sensitive conversations will be redirected to more advanced AI models designed to provide higher-quality and more appropriate responses.
These measures come shortly after the parents of 16-year-old A.R. from California filed a lawsuit against OpenAI and its CEO Sam Altman, alleging that ChatGPT played a role in their son’s suicide. While experts caution that such cases cannot simply be attributed to the technology itself, the companies say the new safeguards are meant to create a safer digital environment, particularly for younger users.