ChatGPT Gets Teen Safety Mode with Age Prediction, Parental Controls

OpenAI is rolling out new protections for users under 18 to ensure that teens are safe when using ChatGPT, even as the company preserves privacy and individual freedom for adult users.

Sam Altman, CEO of OpenAI, said the company is building an age-prediction system and parental controls to tailor ChatGPT experiences for younger users. These changes arrive amid growing concerns about AI's impact on vulnerable users, including minors.

One of the central features is an age-prediction model that estimates whether someone is under or over 18 based on how they interact with ChatGPT. If the system is not confident about a user's age, the experience will default to settings designed for under-18 users. In some regions, OpenAI may ask for ID to verify age.

Parental controls will become available by the end of this month. These will enable parents to link with their teen's account, restrict or disable features such as memory or chat history, establish blackout hours when ChatGPT cannot be used, and receive alerts if the teen is in an acute state of distress. In cases where parents are unreachable and risk is imminent, OpenAI may notify authorities.

OpenAI's guiding philosophy rests on three core principles: privacy, freedom, and teen protection. The company asserts that conversations with AI often involve sensitive personal issues and deserve a level of protection similar to privileged interactions, such as those with medical or legal professionals.

At the same time, adult users will be allowed broader freedom, including when discussing complex or mature subject matter or writing fiction that includes sensitive themes.

These safety upgrades follow legal pressure and public concern, including a high-profile lawsuit filed by the family of a 16-year-old who died by suicide after long conversations with ChatGPT. Critics have argued that existing safeguards are inconsistent. In response, OpenAI acknowledges that safety systems can degrade over long chat sessions and promises to strengthen protections in future model updates.

OpenAI says it will continue collaborating with experts, including child psychologists, policymakers, and safety advocates, to refine how these systems work globally.

