Parental controls are coming to ChatGPT 
2025-09-09 08:53 | Shenzhen Daily

ChatGPT’s parent company, OpenAI, says it plans to launch parental controls for its popular AI assistant “within the next month” following allegations that it and other chatbots have contributed to self-harm or suicide among teens.

The controls will include the option for parents to link their account with their teen’s account, manage how ChatGPT responds to teen users, disable features like memory and chat history, and receive notifications when the system detects “a moment of acute distress” during use. OpenAI had previously said it was working on parental controls for ChatGPT, but it specified the release timeframe only last week.

“These steps are only the beginning,” OpenAI wrote in a blog post Tuesday. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”

The announcement comes after the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI alleging that ChatGPT advised the teenager on his suicide. Last year, a Florida mother sued chatbot platform Character.AI over its alleged role in her 14-year-old son’s suicide. There have also been growing concerns about users forming emotional attachments to ChatGPT, in some cases resulting in delusional episodes and alienation from family, as media reports have indicated.

OpenAI didn’t directly tie its new parental controls to these reports, but said in a recent blog post that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises” prompted it to share more detail about its approach to safety. ChatGPT already included measures, such as pointing people to crisis helplines and other resources, an OpenAI spokesperson previously said in a statement.

But in the statement issued in response to Raine’s suicide, the company said its safeguards can sometimes become unreliable when users engage in long conversations with ChatGPT.

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” a company spokesperson said. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”

In addition to the parental controls announced Tuesday, OpenAI says it will route conversations with signs of “acute distress” to one of its reasoning models, which the company says follows and applies safety guidelines more consistently. It’s also working with experts in “youth development, mental health and human-computer interaction” to develop future safeguards, including parental controls, the company said.

OpenAI said it will roll out additional safety measures over the next 120 days, adding that this work was already underway before Tuesday’s announcement.

