Some AI companies asked for regulation. Now that it’s coming, some are furious.
Since last year, tech titans from Mark Zuckerberg to OpenAI CEO Sam Altman have gone to Congress to discuss artificial intelligence regulation, warned of the technology’s potentially catastrophic effects, and even asked to be regulated.
California legislators have responded with bills intended to outlaw unfair bias in AI decision-making programs, mitigate the technology's effects on elections through false and misleading information, and require insight into how models are trained, among more than a dozen other proposed pieces of legislation.
“As we’ve seen with the creation and expansion of the internet and social media, we cannot count on this industry to self-regulate,” said Teri Olle, director of Economic Security California Action, which co-sponsored one of the bills. “They simply won’t put public interest above their profits — as they have proven time and again.”
But now that proposed rules are moving forward, companies are crying foul, saying that holding them liable for the downstream uses of the technology they build will stifle innovation and force a multibillion-dollar industry out of California altogether.
Nowhere is the pushback clearer than in a letter Meta, the parent company of Facebook, recently sent to state Sen. Scott Wiener, D-San Francisco, protesting his marquee effort to place some liability on big AI developers, SB1047.
The bill would focus on future versions of big AI programs, those that cost $100 million or more to train and are of a size that does not yet exist, and would allow the state attorney general to sue developers if their models cause mass havoc. The bill does not give private citizens the right to sue AI companies. It would also require safety testing of models to prevent foreseeable harms, such as their being used to create biological weapons or to knock out the power grid.
“If a company decides to impose such a severe risk on the public, without good justification, it should be prepared to take responsibility for the consequences,” Nathan Calvin, senior counsel at the Center for AI Safety’s political action fund, a sponsor of the bill, said in an email. “That’s an incredibly fair and reasonable thing to ask of big tech companies like Meta.”
But Meta in its letter said Wiener’s bill places “disproportionate obligations on model developers” because they could be on the hook for how someone else uses their technology. Meta is the maker of the Llama family of AI programs, which use an “open source” approach that allows any company or developer to repurpose them for free as a chatbot or for other uses.
Another powerful opponent of the bill, Silicon Valley venture capital firm Andreessen Horowitz, has launched a website warning of what it calls the potential harms of SB1047, from chilling investment to crushing the startup ecosystem based on open source models.
Andreessen Horowitz partner Anjney Midha said in a published interview that proving a program is totally safe would be impossible and would expose companies and developers to huge risk, because the bill lacks clear definitions of what would violate its provisions.
Firm founder Marc Andreessen recently sat on stage at a Stanford AI event decrying Wiener’s bill, and has separately said that the only way to truly regulate AI programs, and open source in particular, would be to do it worldwide, with the potential to start a war in the process.
Wiener said the goal is not to regulate every AI model everywhere, but only large ones that will be developed in the future. The liability created under the bill is extremely narrow, he said, adding it is designed to prevent the most catastrophic uses of AI. These are outcomes, he said, for which developers would probably face legal liability anyway, but he is trying to prevent that from happening in the first place with safety testing.
He also said Andreessen Horowitz had pushed “disinformation” about the bill, including that developers could be thrown in jail, which he called “completely false,” saying that a company could face criminal liability only if it lied about safety testing.
Despite the bill’s focus on large AI models and companies, smaller firms, such as San Diego’s Benchmark Labs, which makes AI-powered weather forecasting technology, are concerned they could face liability in the future as their models grow. CEO Carlos Gaitan said during a news conference that his company is not currently covered under Wiener’s bill, but that it could be as the size and complexity of his weather models increase.
“If an arsonist receives my forecast and decides, God forbid, to start a fire, I would be liable for that,” Gaitan said.
From Wiener’s perspective, his bill doesn’t apply to any existing program and would regulate only large future programs. AI developers signed a White House pledge to develop the technology safely last year and, he said, he’s just asking them to keep their word. “It can’t just be opaque voluntary compliance,” Wiener said.
“Let’s do the safety evaluation up front” to prevent those situations from occurring at all, he said.
Meta and Andreessen Horowitz both said the bill, if it became law, would push companies out of California to avoid being held to its provisions.
Wiener said he has met with representatives from both companies and others in the tech industry and has made amendments to the bill, including clarifying when a company would no longer be liable because another developer had sufficiently modified its program before it caused harm.
Threats to leave the state are a straw man, he said, because the bill covers any company doing business in California, not just those based there. Companies made similar threats when he passed state data privacy and other legislation, but they didn’t follow through, Wiener said.
Whether Meta or other companies are responsible for what is done with their products echoes the debate about whether social media companies should be liable for illegal things done on their sites. For the most part, they aren’t. But AI companies shouldn’t be treated any differently when their technology causes harm than a car company would be if its seat belts failed, said Ahmed Banafa, a professor at San Jose State University.
In sending the letter, Meta signaled that it is “trying to avoid spending more time and money on testing” of its models, he said. In an analogous case, when a car is defective and causes an injury, the maker, not the driver, is liable.
But Meta is saying “it’s not our responsibility if someone else is going to use it in a harmful way,” Banafa said.
In a sign of the divide between the two sides, Midha of Andreessen Horowitz also brought up the car example. Except in his view, Wiener’s bill amounted to “holding car manufacturers liable for every accident caused by a driver who’s modified their car.”
Wiener brushed aside some of the objections brought up by Andreessen Horowitz as “extreme and melodramatic,” calling his bill “light-touch regulation” that doesn’t ban anything or require a license to train the big AI models of the future.
“We’re simply requiring that large labs do the safety testing that they have publicly committed to doing,” he said.
(c)2024 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.