
California led the way in 2024 with substantial progress in this space: Sen. Bill Dodd’s Artificial Intelligence Accountability Act, for example, requires the Office of Emergency Services to “perform a risk analysis of potential threats posed by the use of generative artificial intelligence to California’s critical infrastructure, including those that could lead to mass casualty events.”
Sen. Josh Becker’s California Artificial Intelligence Transparency Act helps users understand whether the content they are viewing is AI-generated by requiring large generative AI companies to label such content, including problematic, deceptive or deepfake content, and to provide consumers with an AI-detection tool when such a label isn’t available.

Legislative efforts that did not become law, either because they never reached the governor’s desk or were vetoed, offer insight into the next steps for the 2025-26 legislative session. Among my colleagues in the state Assembly and Senate, it is generally agreed that all new AI efforts should be evaluated through the lens of harmonization with international partners such as the European Union. With the EU rolling out its AI Act and the related General-Purpose AI Code of Practice, my colleagues and I are studying Europe’s approach to identify efficiencies and mirror its regulatory efforts.
MISINFORMATION, METRICS, UNIFORM STANDARDS
As a state, first and foremost, we must combat misinformation. As generative AI capabilities grow, the threat of misinformation and disinformation intensifies. Last year’s legislation on synthetic content detection laid a foundation, but further steps are needed to address this issue in an era of advanced generative systems.

Second, we need to refine the metrics we use for regulation. The state must shift its focus away from technological metrics, such as floating-point operations per second (a measure of raw computing power), and away from regulating the technology itself, toward harm-based approaches that target specific use cases. This ensures flexibility and avoids stifling innovation.
Next, it’s essential that we establish uniform standards. Defining critical terms like “artificial intelligence” and “automated decision systems” has been a vital step in creating clarity amid the proliferation of AI legislation. Precise, uniform definitions enable effective regulation and enforcement. They also reduce compliance costs, given that many companies must already comply with established international standards and laws.
STRIKING A BALANCE
Finally, it’s crucial that we strike a balance between innovation and safeguards. California will continue to emphasize targeted, case-specific regulations rather than sweeping measures. This approach aligns with Gov. Gavin Newsom’s commitment to responsible innovation and ensures that AI’s potential to transform how businesses and governments do their jobs is realized without unnecessary risks or barriers. We must establish clearly defined guardrails that achieve our goals while remaining narrowly tailored to avoid unnecessarily high compliance costs.

Thus far, California has enacted 17 AI-related laws, giving it the most significant AI legislative program of any state in the country. By balancing the pursuit of AI’s potential with protection of the public from its perils, this legislative endeavor ensures that California will continue to lead the way in AI innovation and regulation. Thoughtful regulation is not a barrier but a pathway to responsible innovation, ensuring that AI serves humanity’s best interests for the future.
This commentary first appeared in the Sacramento Bee.