California has taken one more step toward regulating the booming AI industry, this time with a broad-strokes bill from state Sen. Scott Wiener, D-San Francisco, aiming to regulate how the technology is built and its effects on Californians.
The bill envisions creating a state agency or tasking an existing agency with guiding and regulating responsible development of the technology, while also putting some onus on developers to ensure their technology isn’t used for malicious purposes. The legislation would also require companies working on AI models to test them for safety risks, and inform the state how they would mitigate those risks when problems arise.
A state version of a national research cloud, called CalCompute, is also outlined in the bill and could “help ensure that California plays a globally central role in the rigorous evaluation and development of AI systems.”
The so-called “intent bill” introduced at the end of the current session will not move through the Legislature this year, but is intended to generate discussion for a future legislative push.
“Large-scale AI presents a range of opportunities and challenges for California, and we need to get ahead of them and not play catch up when it may be too late,” Wiener said in a statement.
“As a society, we made a mistake by allowing social media to become widely adopted without first evaluating the risks and putting guardrails in place. Repeating the same mistake around AI would be far more costly,” he added, noting the technology also had the potential to change people’s lives for the better.
Editor’s note: The proposed law, state Senate Bill 294, made it as far as the Senate Rules Committee before being withdrawn.
The announcement comes a week after Gov. Gavin Newsom signed an executive order on AI calling for a study of the risks and potential benefits of using the technology — including within the state government — with an eye to avoiding its potential to amplify bias.
AI chatbots and other products are largely trained on vast data sets scraped from the Internet, which can inadvertently instill the biases and attitudes of the open web into how they answer questions.
Other AI products can be used to instantly decide whether a person is extended a line of credit or approved for an apartment, but they can also unfairly factor race or socioeconomic status into their decision-making.
Another bill from this session authored by Assemblymember Rebecca Bauer-Kahan, D-Orinda, is seeking to regulate that kind of algorithmic bias when it comes to so-called “automated decision-making tools.”
There is also a danger of chatbots spreading misinformation with their answers to prompts, or instructing users on how to perform dangerous or illegal activities.
Editor’s note: The legislation, Assembly Bill 331, was held in submission and did not move beyond committee this session.
The framework legislation comes amid some early bipartisan movement in Washington toward regulating the AI industry, though no laws have yet been proposed.
President Joe Biden is working on an executive order aimed at regulating the industry, and on Tuesday another group of leading AI companies signed a pledge at the request of the White House to responsibly develop the powerful new technology.
The Senate Judiciary Committee has also been holding hearings with experts on Capitol Hill aimed at eventually creating legislation, and the two leading senators on the committee, Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., have set out a bipartisan AI framework.
(c)2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.