OpenAI to Permit Mature Content in ChatGPT for Verified Adults Starting December
OpenAI announced that beginning in December 2025 it will allow mature and erotic content on ChatGPT, but only for users who verify their age as adults. Sam Altman framed the shift as part of a “treat adult users like adults” principle, a deliberate move away from the stricter limitations the company had previously enforced in the name of mental health safety.

Altman said that ChatGPT had been held back by protections designed to guard vulnerable users, but that advances in monitoring and safeguard tools now give OpenAI confidence to relax restrictions in many cases. Alongside mature content, OpenAI will give users more control over the tone and personality of ChatGPT, making it more human-like or expressive (using emojis, acting like a companion) if the user opts in.
This move comes amid broader regulatory and cultural pressure. Meta, for example, is tightening how much content minors can see on its platforms, adopting PG-13 standards for users under 18. OpenAI also recently introduced parental controls and age-appropriate experiences for minors to reduce exposure to sensitive content.

From my perspective as a journalist, this change carries potential and risk in equal measure. On one hand, it could increase engagement and revenue by catering to adult users whose requests were previously blocked. On the other, it amplifies questions about how AI platforms navigate consent, mental health, moderation, and cultural standards.

One key risk is verification and enforcement: how robust will the age verification system be, and could it be bypassed or misused? Another is boundary control: allowing content in some domains but not others means drawing the line between permissible “mature content” and what remains harmful or illegal, which is fraught. There is also regulatory exposure: governments already debate AI content limits, child protection laws, and platform liability, and OpenAI may face scrutiny in multiple jurisdictions.
A real-world analogy: when streaming platforms introduced “adult mode” filters or content ratings, they had to build systems to ensure children were blocked, parental controls worked, and content labeling was accurate. AI adds extra complexity because it generates content dynamically rather than just serving pre-rated media. The systems must detect intent, context, tone, and possible harm in real time.
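To make that last point concrete, here is a minimal sketch of what a runtime gate around generated text could look like. This is purely illustrative: the class names, the placeholder keyword classifier, and the blocking messages are my own assumptions, not OpenAI's actual architecture, which has not been disclosed in this detail.

```python
from dataclasses import dataclass

# Hypothetical moderation gate. All names and rules here are illustrative
# assumptions, not OpenAI's real system.

@dataclass
class UserProfile:
    user_id: str
    age_verified: bool  # result of an upstream age-verification step

@dataclass
class ModerationResult:
    mature: bool    # adult-oriented but permissible for verified adults
    harmful: bool   # blocked regardless of age or verification status

def classify(text: str) -> ModerationResult:
    """Placeholder classifier; a real system would call a trained model,
    not match keywords."""
    lowered = text.lower()
    mature_terms = {"explicit", "erotic"}
    harmful_terms = {"minor", "non-consensual"}
    return ModerationResult(
        mature=any(t in lowered for t in mature_terms),
        harmful=any(t in lowered for t in harmful_terms),
    )

def gate_response(user: UserProfile, generated_text: str) -> str:
    """Decide whether a freshly generated reply can be shown to this user."""
    result = classify(generated_text)
    if result.harmful:
        return "[blocked: content violates policy for all users]"
    if result.mature and not user.age_verified:
        return "[blocked: age verification required for mature content]"
    return generated_text

# The same generated text is served or withheld depending on verification.
adult = UserProfile(user_id="u1", age_verified=True)
minor = UserProfile(user_id="u2", age_verified=False)
reply = "An explicit scene follows..."
print(gate_response(adult, reply))  # served
print(gate_response(minor, reply))  # withheld
```

The point of the sketch is the ordering: harm checks apply to everyone, while the mature-content branch depends on the age-verification flag, which is exactly where the bypass and enforcement risks discussed above concentrate.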
Tags:
AI