# Building towards age prediction

Over the past two weeks, we’ve been expanding our conversations with experts, advocacy groups, and policymakers who support and guide our efforts to make ChatGPT as helpful as possible.

We’re continuing to improve safety for all users, but today we’re sharing more about how we’re strengthening protections for teens. When some of our principles are in conflict, we prioritize teen safety ahead of privacy and freedom—and we explain our thinking more here. It’s crucial to have effective tools to guide how these technologies show up in teens’ lives, in both the easy moments and the hard ones.

Teens are growing up with AI, and it’s on us to make sure ChatGPT meets them where they are. The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult.

Today, we’re sharing that we’re building toward a long-term system to understand whether someone is over or under 18, so their ChatGPT experience can be tailored appropriately. When we identify that a user is under 18, they will automatically be directed to a ChatGPT experience with age-appropriate policies, including blocking graphic sexual content and, in rare cases of acute distress, potentially involving law enforcement to ensure safety.

This isn’t easy to get right, and even the most advanced systems will sometimes struggle to predict age. If we are not confident about someone’s age or have incomplete information, we’ll take the safer route and default to the under-18 experience—and give adults ways to prove their age to unlock adult capabilities.
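The routing rule described above can be sketched as a small decision function. This is a hypothetical illustration only, assuming an age-prediction signal with a confidence score and an optional adult-verification flag; the names, threshold, and structure are made up and do not reflect the actual system.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    predicted_age: int  # hypothetical model's best guess at the user's age
    confidence: float   # hypothetical score in [0.0, 1.0]

ADULT_AGE = 18
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff, chosen for illustration

def select_experience(estimate: AgeEstimate, verified_adult: bool = False) -> str:
    """Route a user to the under-18 or adult experience.

    Mirrors the policy in the text: when the age signal is uncertain
    or incomplete, default to the safer under-18 experience, while
    letting adults unlock adult capabilities by proving their age.
    """
    if verified_adult:
        return "adult"
    if estimate.confidence < CONFIDENCE_THRESHOLD:
        return "under-18"  # safer default when not confident
    return "adult" if estimate.predicted_age >= ADULT_AGE else "under-18"
```

The key design choice this sketch highlights is that uncertainty never resolves toward the adult experience; only explicit verification does.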

## Parental controls

In the meantime, parental controls will be the most reliable way for families to guide how ChatGPT shows up in their homes. As we previously shared, these controls will be available by the end of the month and will give parents a set of tools to manage their teen's experience.

These controls will add to features available for all users, including in-app reminders during long sessions to encourage breaks.

## Continuing to learn and improve

We’ll continue to share progress as we go. In the meantime, we want to thank the partners, advocates, and experts who are sharing feedback and pushing us to improve. Your voices are shaping this work in ways that will matter for hundreds of millions of people.


Originally published on OpenAI News.