Much of the debate about whether the government should regulate artificial intelligence has centered on Congress, where top AI voices have testified in highly publicized hearings. But with a gridlocked Congress, some lawmakers and tech experts see the much more agile California Legislature as a key player in the debate. Gov. Gavin Newsom told The Chronicle he’s also starting to focus on the issue.
Newsom said he’s taking a “deep dive” into artificial intelligence now that he’s wrapped up budget negotiations for the year. In an interview at the end of last month, he said he spent his day in San Francisco getting briefed on the technology “from the source.” He didn’t specify whom he met with, but San Francisco is home to many emerging technology companies, including OpenAI, the creator of ChatGPT.
California has led the push to regulate other areas of the tech industry, a role Golden State lawmakers often relish. For example, California’s 2018 Internet privacy law, which requires websites to let California residents opt out of the collection and sale of their data, has set standards for websites that operate across the country and inspired similar legislation in other states.
“Just as in the case of privacy, California could potentially lead on this front as well,” said Daniel Ho, a law and politics professor at Stanford who advises the Biden administration on artificial intelligence.
Interest in AI has skyrocketed since San Francisco-based OpenAI launched its text-generation software ChatGPT and its image-generating counterpart, DALL-E. So far, much of the conversation around regulating the new technology has centered on Washington, where OpenAI CEO Sam Altman testified before Congress in May that AI could be dangerous and should be regulated. Senate Majority Leader Chuck Schumer, D-New York, is working to develop regulations for the industry. Last month, President Joe Biden met privately with Newsom and top AI experts in San Francisco to discuss possible regulation.
But with a politically divided Congress, passing any legislation is a challenge, and regulating an industry as new and complex as AI would be monumentally difficult. California lawmakers, on the other hand, advance hundreds of bills every year out of the Capitol in Sacramento, where Democrats hold supermajorities in both chambers and often tackle highly controversial issues, from oil drilling to police accountability.
Assembly Member Rebecca Bauer-Kahan, D-Orinda, who introduced the first major California bill to regulate AI, said that while the Biden administration has released some good standards for technology and artificial intelligence, they aren’t enforceable and she thinks it would be very difficult for Congress to codify them in law.
Since Bauer-Kahan’s bill was shelved in May, the Legislature isn’t weighing any bills to significantly regulate the industry. But that could change as the technology continues to dominate conversation in powerful labor and tech circles in California.
AI creates a host of concerns for labor unions, a powerful force in California politics. They include employers replacing workers with AI and using writers’ and artists’ creative works to generate content, said Lorena Gonzalez Fletcher, a former San Diego lawmaker who now leads the California Labor Federation. Gonzalez Fletcher says California needs to step up to regulate the industry.
In the absence of federal or state regulation, Google and OpenAI are facing class-action lawsuits that allege they illegally used copyrighted material and personal information to train their AI chatbots. The companies have denied these claims.
Some leaders in artificial intelligence, including Altman of OpenAI, have called for the federal government to regulate their industry, a plea Gonzalez Fletcher said she’s skeptical of, noting that a push for federal action might reflect a concern that any regulations coming out of California could be much stronger. Weak regulations could legitimize a dangerous business and give consumers an illusion of safety without meaningful protections, she said.
Bauer-Kahan’s bill would have banned companies from using AI-powered algorithms that discriminate against people. To accomplish that, the bill would have required companies that develop those algorithms to assess them and document their intended uses, limitations and potential discriminatory risks.
Businesses typically don’t want to be subject to different regulations in each state, so efforts by individual states to pass their own laws could spur the companies to push for a uniform federal law, Ho said.
Bauer-Kahan’s bill wasn’t nearly as comprehensive as the regulations being advanced by the European Union, which would require companies to disclose information about how their AI systems work and limit the use of facial recognition software.
But experts say the issue she’s trying to tackle is important. Ho pointed to algorithmic discrimination as one of the biggest concerns he and other experts have raised about the proliferation of artificial intelligence. Although discrimination is already illegal, enforcing existing anti-discrimination laws against new artificial intelligence technology is a challenge because state government has difficulty recruiting qualified experts.
In California, human resources software company Workday is leading a push for regulation. Chandler Morse, Workday’s vice president of public policy, said regulation is important for companies like his so the public can trust their technology.
Complex policies can take years to shape in Sacramento, but Morse said that might be too late when it comes to AI regulation.
“I don’t think we have a couple years. In our view, the sooner the better,” he said. “California is in a pretty good position to be that state that acts, and we hope they don’t miss this opportunity.”
Assembly Member Bill Essayli, R-Corona, has introduced a resolution calling for a pause on new AI technology while the government develops new laws. If passed, the resolution wouldn’t carry any enforcement power, but would instead call on the federal government to act.
Essayli said he’s concerned about the potential dangers of AI, including that people won’t be able to tell the difference between real information from politicians and artificially generated videos. The issue has already come up in the Republican presidential primary, where Florida Gov. Ron DeSantis and former President Donald Trump have used AI-generated images to attack each other.
“When the genie is out of the bottle, it’s very hard to put it back in,” Essayli said.
(c)2023 The San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.