Not everyone is a fan of the wild west atmosphere surrounding generative AI, especially where it concerns the safety of so-called “companion chatbots.” Legislation coming out of the California Senate would create new rules and audit requirements for chatbot operators.
The California Senate this week passed Senate Bill 243, authored by Sen. Steve Padilla, D-Chula Vista, which would outline new requirements for the technology, including restrictions on the use of randomized incentives to keep users engaged and protocols for identifying “suicidal ideation, suicide or self-harm expressed by a user to the companion chatbot.”
Legislation signed in 2020 created the state’s Office of Suicide Prevention within the California Department of Public Health. This new legislation, if enacted, would require chatbot operators to submit annual reports on the number of times users expressed suicidal ideation. Data from those reports would be published online, according to the legislation.
In addition to the bill’s reporting component, private-sector operators would need to undergo “regular” third-party audits to ensure they comply with the law. Operators would also be required to disclose whether their technology is suitable for use by minors and to notify users after three hours of continuous interaction with the chatbot platform.
The legislation passed the Senate in a 28-4 vote and was sent to the Assembly for review June 4.
