Anna Washenko
Meta has faced some serious questions about how it allows its underage users to interact with AI-powered chatbots. Most recently, internal communications obtained by the New Mexico Attorney General's Office revealed that although Meta CEO Mark Zuckerberg was opposed to the chatbots having "explicit" conversations with minors, he also rejected the idea of placing parental controls on the feature.
Reuters reported that in an exchange between two unnamed Meta employees, one wrote, "we pushed hard for parental controls to turn GenAI off – but GenAI leadership pushed back stating Mark decision." New Mexico is suing Meta on charges that the company "failed to stem the tide of damaging sexual material and sexual propositions delivered to children"; the case is scheduled to go to trial in February. We've reached out to Meta for comment and will update with any response.
Despite being available for only a brief time, Meta's chatbots have already accumulated quite a history of behavior that veers into offensive, if not outright illegal, territory. In April 2025, The Wall Street Journal published an investigation that found Meta's chatbots could engage in fantasy sex conversations with minors, or could be directed to mimic a minor and engage in sexual conversation. The report claimed that Zuckerberg had wanted looser guardrails around Meta's chatbots, but a spokesperson denied that the company had overlooked protections for children and teens.
Internal review documents revealed in August 2025 detailed several hypothetical situations illustrating which chatbot behaviors would be permitted, and the line between sensual and sexual seemed pretty hazy. The document also permitted the chatbots to argue racist concepts. At the time, a representative told Engadget that the offending passages were hypotheticals rather than actual policy, which doesn't really seem like much of an improvement, and that they had been removed from the document.
Despite the multiple instances of questionable chatbot behavior, Meta decided to suspend teen accounts' access to them only last week. The company said it is temporarily removing access while it develops the parental controls that Zuckerberg had allegedly rejected.
New Mexico filed this lawsuit against Meta in December 2023 over claims that the company's platforms failed to protect minors from harassment by adults. Internal documents surfaced early in that case revealed that 100,000 child users were harassed daily on Meta's services.
Update, January 27, 2026, 6:15PM ET: Corrected misstated timeline of the New Mexico lawsuit, which was filed in December 2023, not December 2024.