
New Bills Would Regulate AI Development, Criminalize Misuse

With less than a week remaining to introduce new bills, state lawmakers continue to release proposed legislation scrutinizing artificial intelligence.

As the days dwindle for state lawmakers to propose new laws in the hopes of making lasting change, several have recently filed AI-focused bills.

Friday is the deadline for elected officials in the California state Senate and Assembly to introduce new legislation. Several recent bills seek, broadly, to home in on the development of AI and its misuse, and, potentially, on how consumers might be able to tell whether text, images, audio or video have been generated by AI. Among the bills:

Senate Bill 1047, from Sen. Scott Wiener, D-San Francisco, would require developers of covered AI models to determine, before training, whether they can make a positive safety determination about the model. The bill defines “positive safety determination” to mean that the model is not a derivative model and that the developer can reasonably exclude the possibility that it has, or may come close to having, a hazardous capability. Before starting training of a non-derivative covered model, developers would have to meet requirements that include being able to promptly and fully shut down the model until a positive safety determination is reached.

The bill would create the Frontier Model Division within the California Department of Technology (CDT) and require developers of non-derivative covered models lacking a positive safety determination to obtain yearly compliance certificates from the division, signed by the chief technology officer or a more senior officer. Developers would have to report each AI safety incident affecting a covered model to the division. The division could assess related fees, which would be deposited into a new Frontier Model Division Programs Fund; money in the fund would be available only via legislative appropriation.

The bill would also require a person who operates a computing cluster to put in place appropriate written policies and procedures for when a customer uses compute resources sufficient to train a covered model, including procedures to assess whether a prospective customer plans to use the cluster to deploy a covered model. Violations would be punishable by a civil penalty sought by the Attorney General. Finally, the bill would require CDT to commission consultants to create a public cloud computing cluster, to be known as CalCompute, for research on the safe and secure deployment of large-scale AI models and on equitable innovation. The bill had its first reading Wednesday and may be acted upon on or after March 9.

SB 970, from state Sen. Angelique Ashby, D-Sacramento, would take aim at false impersonation via AI. It would define terms around AI and the synthetic voice, video and image recordings it produces, clarifying that using such synthetic recordings constitutes false impersonation. The bill would require the Judicial Council to develop and implement screening procedures to identify synthetic records introduced as evidence, and to develop and make available educational materials to help identify evidence that has been tampered with using AI. Any person or entity selling or giving access to AI technology designed to create synthetic images, video or voice would have to include a consumer warning that misusing the technology may expose the user to civil or criminal liability. The bill would require the California Department of Consumer Affairs to specify the format and content of the warning and would impose a civil penalty for violations. The bill had its first reading Jan. 25 and may be acted upon on or after Feb. 25.

Assembly Bill 1873, from Assemblymember Kate Sanchez, R-Rancho Santa Margarita, would take aim at AI used in the sexual exploitation of minors. It would make it a misdemeanor or a felony to knowingly develop, duplicate, print or exchange any representation of information, data or an image generated by AI that depicts a person under age 18 engaged in an act of sexual conduct. The bill defines AI as a “machine-based system that can, from a given set of human-defined objectives, make a virtual output.” The bill had its first reading Jan. 22 and may be heard in committee Feb. 22, but has not yet been assigned to a committee.

SB 942, from Sen. Josh Becker, D-San Mateo, is an intent bill, stating the intent of the Legislature to pass a bill establishing a mechanism that would give consumers an easy way to determine whether images, audio, video or text were created by AI. The bill was given its first reading Jan. 17 and sent to the Senate Rules Committee for further assignment, but does not yet appear to have been assigned. It may be acted upon on or after Feb. 17.
Theo Douglas is Assistant Managing Editor of Industry Insider — California.