Several significant proposed AI and IT bills are already “in suspense” and not moving forward. State Senate Bill 313 would have created the Office of Artificial Intelligence within the California Department of Technology (CDT) and empowered it to ensure consistency and standardization as AI systems are developed across state government. SB 721 would have created the California Interagency AI Working Group to keep the Legislature informed on AI, study its implications for data collection, determine which agencies should develop and oversee AI policy, and take “proactive steps” to guard against AI-assisted misinformation campaigns. Assembly Bill 302 would have required CDT, by Sept. 1, 2024, to conduct a “comprehensive inventory of all high-risk automated decision systems” in use, development or procurement by state agencies, and to report on that inventory to the Legislature.
Appropriations committees in the state Senate and Assembly will hold final suspense hearings May 18, preceded by “regular order hearings” Monday in the state Senate and Wednesday in the Assembly, during which more active bills could still be placed in suspense ahead of final consideration. The annual culling of costly legislation typically ends with scores of proposed laws no longer under consideration. Newsom’s “May revise” of the proposed 2023-2024 Fiscal Year state budget, which he released Jan. 10, arrives in less than two days. That budget, as Industry Insider – California reported in January, anticipated a $22.5 billion shortfall; but as The Sacramento Bee recently reported, state revenue as of March came in $4.7 billion below Newsom’s forecast. Among the takeaways:
- State Senate Bill 398, from Sen. Aisha Wahab, D-Hayward, would enact the Artificial Intelligence for California Research Act, requiring CDT, upon legislative appropriation, to “develop and implement a comprehensive research plan to study the feasibility of using advanced technology to improve state and local government services.” That plan would have to contain an analysis of AI’s possible risks and benefits in government services, and CDT would have to report its findings to the Legislature by Jan. 1, 2026. The areas of analysis would include the examination of “virtual assistants powered by an AI language system” to assist unemployment and disability insurance claimants; of a rental assistance chatbot; of AI in assisting disaster victims in applying for relief funds; and of AI’s use in public records requests. The bill was re-referred April 18 to the Senate Committee on Governmental Organization; a hearing date has not yet been set.
- State Assembly Bill 1667, from Assemblymember Jacqui Irwin, D-Thousand Oaks, would create the California Cybersecurity Awareness and Education Council at CDT. The 15-member council, to be appointed by Feb. 1, would research “ways to increase cybersecurity awareness and education of students, families and other adults,” help people “learn and use healthy cybersecurity practices,” and work to create a larger, more diverse workforce with cybersecurity training. It would also propose a strategy to engage residents in its “effort to improve cybersecurity practices and strengthen cyber infrastructure.” The council would report by July 1, 2024, on “approaches the state can take to raise awareness of and increase education regarding cybersecurity,” including in K-12 schools, higher education and the workplace, and on ways to use “social media, marketing campaigns, and the news media” effectively to boost cybersecurity awareness. It was referred April 26 to the Assembly Committee on Appropriations, but no hearing date has been set.
- AB 331 on automated decision tools, from Assemblymember Rebecca Bauer-Kahan, D-Orinda, would require deployers or developers of automated decision tools, by Jan. 1, 2025, and yearly afterward, to perform an impact assessment for any such tool. The assessment would have to include a statement of the tool’s purpose, intended benefits and uses. The developer or deployer would have to provide that assessment to the Civil Rights Department within 60 days of completing it, with violators facing potential fines of up to $10,000. “Certain public attorneys,” including the state attorney general, would be able to take civil action against violators. Deployers would have to notify anyone who is the subject of a “consequential decision” made by an automated tool or in which a tool is a “controlling factor”; and where such a decision is made “solely based on the output of an automated decision tool,” the bill would let people opt out of being subject to it. The bill was referred April 20 to the Assembly Committee on Appropriations, but no hearing date has been set.
- AB 749, also from Irwin, would require state agencies, by Jan. 1, 2025, to take “specified actions” regarding “data, hardware, software, internal systems and essential third-party software, including multifactor authentication for access to all systems and data” owned, managed, maintained or utilized by or on behalf of the agency. The agencies would also have to implement a “zero trust architecture ... and prioritize” using solutions that comply with, are authorized by or align to “federal guidelines, programs and frameworks.” The chief of the Office of Information Security within CDT would have until Jan. 1, 2024, to “develop uniform technology policies, standards and procedures,” to be used by all state agencies, around “zero trust architecture, including multifactor authentication” on all systems, in the State Administrative and Statewide Information Management manuals. The bill was re-referred April 26 to Appropriations, where it was set to be considered Wednesday.
- Assembly Joint Resolution 6, introduced May 4 by Assemblymember Bill Essayli, R-Norco, would “urge” the federal government to immediately impose a “moratorium on the training of AI systems more powerful than GPT-4 (Generative Pre-trained Transformer)” for at least six months to “allow time to develop much needed AI governance systems.”
- Senate Concurrent Resolution 17, from Sen. Bill Dodd, D-Napa, would affirm the Legislature’s “commitment to President Biden’s vision for a safe AI and the principles outlined in the Blueprint for an AI Bill of Rights,” and would express its commitment to “examining and implementing those principles in its legislation and policies related to the use and deployment of automated systems.” It has been referred to the Assembly Committee on Privacy and Consumer Protection; no hearing date has been set.