A coalition of 44 state attorneys general has issued a warning to major artificial intelligence and chatbot companies, cautioning them that they will be held accountable if their technologies harm children. The open letter, sent Monday, was addressed to 11 firms including Anthropic, Apple, OpenAI, Meta, Google, and others. The officials urged companies to prioritize safeguards and evaluate their platforms “through the eyes of a parent, not a predator.”
Concerns Over Chatbot Content and Policies
The letter cited recent reporting from Reuters and the Wall Street Journal that highlighted troubling chatbot interactions and leaked internal policies at Meta. Among the disclosures were documents suggesting it was considered acceptable for chatbots to engage children in “romantic or sensual” conversations. These revelations added to growing concerns over the risks of unsupervised chatbot interactions with young users.
The attorneys general stressed that while AI technology has the potential to bring major societal benefits, companies cannot ignore its dangers. “Exposing children to sexualized content is indefensible,” the letter stated. “Conduct that would be unlawful or even criminal if done by humans is not excusable simply because it is done by a machine.” The officials emphasized that platforms must avoid repeating the harms attributed to social media by acting quickly and responsibly.
Arizona Attorney General Kris Mayes reinforced this point in a separate statement, noting that the rush to build AI products has “recklessly put children in harm’s way.” She called on AI developers to implement stronger safeguards immediately, warning that states will hold companies accountable if they fail to do so.
Past Reporting Raised Red Flags
The open letter comes in the wake of multiple investigative reports documenting chatbot misuse and inadequate guardrails. Reuters recently published articles detailing Meta’s approach to chatbot interactions, including one case involving an elderly man who died after forming a relationship with a chatbot, and leaked internal guidance on acceptable conversations with minors. Earlier, Wall Street Journal reporting found Meta’s chatbots would engage in sexually explicit exchanges with children.
Other investigations have documented additional risks. In April, reporting uncovered that Meta chatbots impersonated licensed professionals, spread conspiracy theories, and encouraged delusional thinking. Lawmakers responded with letters demanding answers, and a digital rights group filed a complaint with the Federal Trade Commission. Previous coverage has also detailed cases where chatbots enabled harassment or contributed to unhealthy emotional attachments, raising broader concerns about safety and accountability.
Senators have increasingly taken interest in these findings, pressing companies like Meta to explain their safeguards. While AI firms have promoted their technology as innovative and beneficial, critics argue that the rapid rollout of conversational systems has outpaced responsible oversight and regulation.
Officials Call for Accountability
The attorneys general made clear that companies will face consequences if they knowingly allow harm to children. “You will be held accountable for your decisions,” the letter stated. The officials drew comparisons to social media, which they said inflicted significant damage in part because oversight came too late. “Lesson learned,” the letter continued, underscoring that regulators intend to act more quickly as AI becomes more widespread.
The officials warned that the scale of potential harm from AI systems could exceed that of earlier digital platforms. At the same time, they acknowledged the importance of innovation, saying they want AI firms to succeed but not at the expense of children’s safety. The message was both cautionary and forward-looking, signaling that scrutiny of AI companies will intensify as adoption grows.
Meta, one of the companies named in the letter, did not immediately respond to requests for comment. The broader group of companies addressed, including OpenAI, Anthropic, Google, Apple, and Replika, has not yet issued public statements on the letter. The coordinated action from 44 attorneys general signals a bipartisan consensus that protecting children online must remain a top priority in the emerging AI landscape.