Child-Safety Chatbot Bill Clears Senate

Legislation that would establish new guardrails for companies offering so-called companion systems has cleared the Senate. Opponents say the bill could have unintended consequences for innovation and for access to supportive technologies.

The California Senate has passed legislation that would set new requirements for artificial intelligence systems that interact with children, specifically those designed to act as “companions.”

Such a system is described in Assembly Bill 1064 as “a generative artificial intelligence system with a natural language interface that simulates a sustained humanlike relationship with a user.” The bill was authored by Assemblymember Rebecca Bauer-Kahan, D-San Ramon.

Among other things, the bill would prohibit a company from making its technology available to a child unless the system is not “foreseeably capable” of encouraging self-harm, violence or other illegal and harmful acts; engaging in sexually explicit activities; or prioritizing the user’s beliefs and preferences above factual accuracy or the child’s safety.

According to the legislation — which is also called the Leading Ethical AI Development (LEAD) for Kids Act — these “products are designed to exploit children’s psychological vulnerabilities, including their innate drive for attachment, tendency to anthropomorphize humanlike technologies and limited ability to distinguish between simulated and authentic human interactions.”

While the phrase “companion chatbot” may seem very specific, the legislation paints a broader picture that raises questions about where exactly the line lies between popular chatbot models and child-specific models.

The legislation could mean changes to models that aren’t currently identified as companion systems; it would cover any system that retains information from prior interactions, asks unprompted or unsolicited “emotion-based” questions, and sustains an ongoing personal dialogue. Systems exempt from the proposed law would include those used strictly for customer or commercial services, those marketed as efficiency tools, and any used by a business for internal purposes only.

Aodhan Downey, with the Computer and Communications Industry Association, voiced opposition to the legislation during a Senate Committee on Judiciary hearing in July, saying that while his organization believes in protecting children online, the bill could have unintended consequences, such as chilling innovation or limiting access to supportive technologies.

“This framing could unintentionally block children and teens from accessing AI tools that offer real benefits. This prohibition will negatively impact youth. For example, AI is being used to personalize tutoring, support pediatric health care and even help youth in crisis,” Downey said.

Downey went on to say that the bill’s language was overly broad and could impact tools like troubleshooting chatbots.

Robby Torney, senior director of AI programs at the nonprofit Common Sense Media, spoke in favor of the legislation during the July committee hearing, citing research into the harmful impacts chatbot technology has already had on teens and children. To date, there have been several high-profile cases in which teenagers died by suicide following interactions with companion systems.

During a Senate Committee on Appropriations hearing Aug. 18, the Department of Finance voiced financial concerns about the bill, noting that the legislation would create new general fund pressures not accounted for in the 2025-26 budget.

“Dependent on the appropriation of funds, the Department of Justice will need $280,000 for the first year and $507,000 for ongoing years to implement the directives of this bill,” Deputy Director of Legislation Christian Beltran said in his testimony. “These funds will be used to onboard one deputy attorney general, one legal secretary and use consulting services to address the increased workload.”

Assembly Bill 1064 was approved by the state Senate Sept. 10 and will head to the governor’s desk once both houses reconcile their changes. While it’s unclear whether Gov. Gavin Newsom will sign the bill into law, his administration has taken an aggressive approach to generative AI use and its associated guardrails.

Eyragon is the Managing Editor for Industry Insider — California. He previously served as the Daily News Editor for Government Technology. He lives in Sacramento, Calif.