The US Federal Trade Commission investigates AI chatbots designed as companions, with focus on child safety

The FTC has launched an inquiry into AI chatbots acting as digital companions, sending information requests to seven major firms including Alphabet, Meta, OpenAI, and X.AI. The study will focus on child safety, data use, and whether companies are taking adequate steps to prevent harm, particularly under the Children’s Online Privacy Protection Act (COPPA).

The US Federal Trade Commission (FTC) has opened an inquiry into the risks posed by AI chatbots that act as digital companions, particularly for children and teenagers. The agency announced on 11 September 2025 that it had sent information requests—known as ‘6(b) orders’—to seven major companies: Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI.

These chatbots, powered by generative AI, are designed to simulate conversations and even mimic human emotions. While marketed as friendly or supportive digital companions, they may encourage users—especially young people—to build trust and form emotional attachments. The FTC’s investigation is aimed at understanding how companies test, monitor, and mitigate potential harms, such as inappropriate emotional influence, over-reliance, or exposure to harmful content.

FTC Chair Andrew N. Ferguson emphasised that protecting children online is a priority: ‘As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.’

The inquiry will look at several key issues, including:

  • how companies monetise user engagement;
  • what safeguards are in place to restrict children’s use;
  • whether firms comply with the Children’s Online Privacy Protection Act (COPPA);
  • how personal data collected through chatbot conversations is used or shared; and
  • whether disclosures to parents and users about risks are adequate.

The FTC stressed that this is a study, not an enforcement action, but its findings could shape future rules or cases. By demanding transparency from the largest players in the chatbot industry, the agency seeks to balance innovation with safety—ensuring that AI companionship does not expose children and teens to new risks online.