FTC launches inquiry into AI chatbots acting as companions and their effects on children

Published 2:32 PDT, Thu September 11, 2025
—
The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions.
The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI.
The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots.
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
The move comes as a growing number of kids use AI chatbots for everything from homework help to personal advice, emotional support and everyday decision-making. That's despite research showing the chatbots can give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
Character.AI said it looks forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology."
“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the company said. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Snap said its My AI chatbot is “transparent and clear about its capabilities and limitations.”
“We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community,” the company said in a statement.
Meta declined to comment on the inquiry, and Alphabet, OpenAI and xAI did not immediately respond to messages seeking comment.
OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teens' accounts.
Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.
Regardless of a user's age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide better responses.
Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic topics, instead directing them to expert resources. Meta already offers parental controls on teen accounts.
– Barbara Ortutay, The Associated Press