Commentary: With AI Executive Order Rescinded, California Must Lead on AI Regulation

“It’s crucial that we find a balance between innovation and safeguards,” writes state Sen. Thomas Umberg. “California will continue to emphasize targeted, case-specific regulations rather than sweeping measures.”

As home to 70 percent of the world’s leading artificial intelligence companies, California is positioned at the forefront of innovation. With that position comes a responsibility to take the lead on sensible AI regulation. Moving forward, California must continue to lead on AI advancements, but we should also follow the example set by the European Union, which recently passed the AI Act, the world’s first comprehensive AI law.

On Tuesday, however, President Donald Trump rescinded former President Joe Biden’s AI Executive Order, which aimed to reduce risks to consumers, workers and national security posed by AI technology. Now, with a divided Congress, little can be expected of Washington on this front in the foreseeable future. Working cooperatively with the European Union, California must step up to set a model that will guide international technology conglomerates and protect consumers.

California led the way in 2024 with substantial progress in this space: Sen. Bill Dodd’s Artificial Intelligence Accountability Act, for example, requires the Office of Emergency Services to “perform a risk analysis of potential threats posed by the use of generative artificial intelligence to California’s critical infrastructure, including those that could lead to mass casualty events.”

Sen. Josh Becker’s California Artificial Intelligence Transparency Act helps users understand whether the content they are viewing is AI-generated by requiring large generative AI companies to label such content, including problematic, deceptive or deepfake content, and by providing consumers with an AI-detection tool when such a label isn’t available.

Legislative efforts that did not make it into law — either because they never reached the governor’s desk or were vetoed — offer insight into the next steps for the 2025-26 legislative session. Among my colleagues in the state Assembly and Senate, it is generally agreed that all new AI efforts should be viewed through a lens of harmonizing our work with international partners like the European Union. With the EU rolling out its AI Act, and the related General-Purpose AI Code of Practice, my colleagues and I are studying Europe’s approach to identify efficiencies and mirror their regulatory efforts.

MISINFORMATION, METRICS, UNIFORM STANDARDS


As a state, first and foremost, we must combat misinformation. As generative AI capabilities grow, the threat of misinformation and disinformation intensifies. Last year’s legislation on synthetic content detection laid a foundation, but further steps are needed to address this issue in an era of advanced generative systems.

Second, we need to refine the metrics we use for regulation. The state must shift from focusing on technological metrics such as floating-point operations per second (one measure of a computer’s performance) and the regulation of the technology itself to harm-based approaches that target specific use cases. This ensures flexibility and avoids stifling innovation.

Next, it’s essential that we establish uniform standards. Defining critical terms like “artificial intelligence” and “automated decision systems” has been a vital step in creating clarity amid the proliferation of AI legislation. Precise, uniform definitions enable effective regulation and enforcement. They also reduce compliance costs, given that many companies must already comply with established international standards and laws.

STRIKING A BALANCE


Finally, it’s crucial that we find a balance between innovation and safeguards. California will continue to emphasize targeted, case-specific regulations rather than sweeping measures. This approach aligns with Gov. Gavin Newsom’s commitment to responsible innovation and ensures that AI’s potential to transform how businesses and governments do their jobs is realized without unnecessary risks or barriers. We must establish clearly defined guardrails that achieve our goals but are narrowly tailored to avoid unnecessarily high compliance costs. Thus far, California has enacted 17 AI-related laws, giving it the most significant AI legislative program of any state in the country. By harnessing AI’s potential while protecting the public from its perils, this legislative endeavor ensures that California will continue to lead the way in AI innovation and regulation. Thoughtful regulation is not a barrier but a pathway to responsible innovation, ensuring that AI serves humanity’s best interests for the future.

This commentary first appeared in the Sacramento Bee.
State Sen. Thomas J. Umberg represents the 34th Senate District, which includes the cities of Anaheim, Buena Park, Fullerton, Garden Grove, La Habra, Long Beach, Orange, Placentia, Santa Ana and East and South Whittier. Umberg is a retired U.S. Army colonel, former federal prosecutor and small businessman.