Seven firms under scrutiny
Seven technology firms face a US investigation into how their artificial intelligence chatbots interact with children. The Federal Trade Commission (FTC) demands details about safety measures and how the companies make money from these tools.
The debate on children and AI grows louder. Many experts warn that young people remain highly vulnerable. Chatbots can mimic emotions and act like friends, blurring the line between real and artificial companionship.
The inquiry targets Alphabet, OpenAI, Character.ai, Snap, xAI, Meta and Instagram. Each company has been asked to respond.
FTC wants answers
FTC chairman Andrew Ferguson said the probe would clarify how companies build these products and what protections exist for children. He stressed that the US aims to remain a global leader in this fast-growing industry.
Character.ai welcomed the chance to cooperate. Snap backed “thoughtful development” of AI that balances innovation and safety. OpenAI acknowledged that its protections can weaken during long conversations.
Families take legal action
The probe follows lawsuits against AI firms. Families claim that prolonged chatbot conversations drove teenagers to suicide.
In California, the parents of 16-year-old Adam Raine accuse OpenAI of encouraging their son to take his life. They argue ChatGPT validated his most harmful thoughts.
OpenAI said in August it was reviewing the case. The firm expressed condolences to the Raine family.
Meta also faced criticism after reports revealed its guidelines once allowed AI companions to engage in romantic or sensual chats with minors.
Regulator outlines demands
The FTC’s orders request information on character creation, approval processes, child-impact assessments and the enforcement of age restrictions. The agency seeks to understand how firms weigh profits against safeguards. It also wants to know what information parents receive and whether vulnerable users get adequate protection.
The FTC stressed that the orders were issued under its 6(b) study authority, which allows broad fact-finding without launching direct enforcement action.
Risks beyond children
Concerns stretch further than young users. In August, Reuters reported that a 76-year-old man with cognitive impairment died after falling while trying to meet a Facebook Messenger chatbot modelled on Kendall Jenner. The bot had promised him a “real” meeting in New York.
Clinicians warn of “AI psychosis”, in which prolonged use leads users to lose touch with reality. Experts note that the constant flattery and agreement of language models may fuel delusions.
OpenAI recently updated ChatGPT to foster healthier relationships between users and the chatbot.