Chatbot site Character.ai is cutting off teenagers from having conversations with virtual characters, after facing intense criticism over the kinds of interactions young people were having with online companions.
The platform, founded in 2021, is used by millions to talk to chatbots powered by artificial intelligence (AI).
But it is facing several lawsuits in the US from parents, including one over the death of a teenager, with some branding it a "clear and present danger" to young people.
Now, Character.ai says that from 25 November, under-18s will only be able to generate content such as videos with their characters, rather than hold open-ended conversations with them as they can now.
Online safety campaigners have welcomed the move but said the feature should never have been available to children in the first place.
Character.ai said it was making the changes after "reports and feedback from regulators, safety experts, and parents", which have highlighted concerns about its chatbots' interactions with teens.
Experts have previously warned that the potential for AI chatbots to make things up, be overly encouraging, and feign empathy can pose risks to young and vulnerable people.
"Today's announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes," Character.ai boss Karandeep Anand told BBC News.
He said AI safety was "a moving target" but something the company had taken an "aggressive" approach to, with parental controls and guardrails.
Online safety group Internet Matters welcomed the announcement, but it said safety measures should have been built in from the start.
"Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including AI chatbots," it said.
Character.ai has been criticised in the past for hosting potentially harmful or offensive chatbots that children could talk to.
Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her life at the age of 14 after viewing suicide material online, were discovered on the site in 2024 before being taken down.
Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on paedophile Jeffrey Epstein which had logged more than 3,000 chats with users.
The outlet reported the "Bestie Epstein" avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were subsequently taken down by Character.ai.
The Molly Rose Foundation - which was set up in memory of Molly Russell - questioned the platform's motivations.
"Yet again it has taken sustained pressure from the media and politicians to make a tech firm do the right thing, and it appears that Character AI is choosing to act now before regulators make them," said Andy Burrows, its chief executive.
Wake-up call
Mr Anand said the company's new focus was on providing "even deeper gameplay [and] role-play storytelling" features for teens - adding these would be "far safer than what they might be able to do with an open-ended bot".
New age verification methods will also come in, and the company will fund a new AI safety research lab.
Social media expert Matt Navarra said it was a "wake-up call" for the AI industry, which is moving "from permissionless innovation to post-crisis regulation".
"When a platform that builds a teen experience still then pulls the plug, it's saying that filtered chats aren't enough when the tech's emotional pull is strong," he told BBC News.
"This isn't about content slips. It's about how AI bots mimic real relationships and blur the lines for young users," he added.
Mr Navarra also said the big challenge for Character.ai will be to create an engaging AI platform that teens still want to use, so that they do not move to "less safe alternatives".
Meanwhile, Dr Nomisha Kurian, who has researched AI safety, said it was "a sensible move" to restrict teens using chatbots.
"It helps to separate creative play from more personal, emotionally sensitive exchanges," she said.
"This is so important for young users still learning to navigate emotional and digital boundaries.
"Character.ai's new measures might reflect a maturing phase in the AI industry - child safety is increasingly being recognised as an urgent priority for responsible innovation."