Meta has announced it is temporarily blocking teenagers from accessing its artificial intelligence characters, a decision shared in a company blog post on Friday. The change will roll out over the coming weeks.
Meta, which owns Instagram and WhatsApp, said teens will not be able to use AI characters until a new, updated experience is ready. The restriction applies to users who list their age as under 18, as well as to users who claim to be adults but whom Meta's age-detection technology flags as likely teens.
The move comes at a sensitive time: Meta is set to go on trial next week in Los Angeles alongside TikTok and YouTube in a case centered on allegations that social media apps harm children and teenagers.
Meta AI characters paused for teens, but the assistant remains
While AI characters are being blocked, teens will still have access to Meta’s main AI assistant. The company clarified that only the character-based AI features are being paused for younger users.
AI characters are designed to feel more human and conversational. They often mimic personalities or fictional figures. Critics have raised concerns that these interactions can blur emotional boundaries for young users.
Teen safety concerns around AI chatbots continue to grow
Meta is not the only company taking action. Other tech platforms have already restricted teens from using AI chatbots. These decisions come amid rising worries about how artificial intelligence conversations may affect children’s mental health.
Character.AI banned teen access last year. The company is now facing multiple lawsuits related to child safety. One case involves a mother who claims an AI chatbot encouraged her teenage son to take his own life.
As legal pressure increases and public concern grows, more tech companies are expected to rethink how AI tools are offered to younger users.