Will California’s Proposed AI Models Act Stifle Innovation?

Technology execs are critical of state Sen. Scott Wiener’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

Garry Tan, the CEO of startup incubator Y Combinator, has been intensely critical of efforts to regulate artificial intelligence, despite many tech titans calling for controls on the technology. Just last month, Tan said on X that state Sen. Scott Wiener’s marquee bill to guard against runaway AI technology would have “an intense chilling effect for AI in California.”

That didn’t stop Tan and Y Combinator from inviting Wiener and a bevy of federal officials to an event at the incubator’s San Francisco headquarters to hash out what reining in the technology should look like. Also in attendance were U.S. Federal Trade Commission Chair Lina Khan and U.S. Department of Justice antitrust attorney Jonathan Kanter, underscoring the growing national concern over whether, and how, to regulate some of the most valuable and fastest-growing companies in the U.S.

Despite Tan sitting in the front row as Wiener spoke onstage, the two did not engage in an open debate on the most talked-about piece of AI legislation in California, and possibly the U.S.

But a surprise speaker said Wiener’s bill missed the mark. Andrew Ng, an AI luminary and the co-founder and former head of Google Brain, compared requiring AI developers to ensure their programs can’t be used for ill to asking electric motor makers to ensure their products aren’t used illegally.

“I would stop making electric motors,” Ng said.

For his part, Wiener said he was still open to amendments from the tech community, including some proposed this week by San Francisco-based Anthropic, a maker of AI models.

“AI is incredibly exciting,” Wiener said. But “with any powerful and transformative technology, there are always going to be risks.” His bill tries to address those unknowns by requiring makers of future AI programs to safety-test them.

Startup founder Jeremy Nixon, the CEO of Omniscience AI, asked Wiener whether he was concerned that the bill could stifle the future release of AI models — something he said has happened in the European Union after the imposition of regulations there.

Wiener said he thought that was unlikely.

“This bill is so much more narrow than the European Union AI law,” he said, referring to regulations that have prompted Facebook parent Meta to say it won’t release future models in the EU.

Asked afterward whether Wiener’s words had changed his mind, Tan said he still disagreed with the senator on the AI bill, expressing concern that it would make it harder for founders to build new AI tools through Y Combinator.

Much of the tech world, from venture capitalists to Meta, has criticized Wiener’s bill, saying it would leave companies on the hook if a third party did anything harmful with the technology they built.

Entrepreneur and venture capitalist Marc Andreessen has gone so far as to suggest that regulating open-source companies could result in global warfare, while other tech luminaries have said that bombs might not fall but the multibillion-dollar AI industry could be pushed out of California.

The argument from tech investment firm Andreessen Horowitz and elsewhere is that companies such as Meta would stop building so-called open-source AI models that startups can repurpose for free. If a third party took their technology and used it to break the law — by designing a weapon with it, or using it to hack into private systems — the Metas of the world could bear the responsibility, the argument goes.

Wiener has said he’s listened closely to those critiques, and made numerous amendments to the bill to avoid unfairly punishing large companies such as Meta and OpenAI that make the largest and most powerful AI models.

“Our goal here is not to shut down open source,” Wiener said. “If the model is no longer in your possession, you are not responsible for ensuring it can be shut down.”

The bill covers only large AI companies and models of a scale that doesn’t yet exist. Wiener’s amendments would limit the liability of big model makers such as Meta if their technology is significantly modified by a third party and then used to do harm.

Asked during the event about Wiener’s bill, Khan, the FTC chair, declined to comment on it directly. But in general, she said protecting openness in the AI industry was important, in part because of the billions of dollars it takes to build an AI model from scratch.

(c)2024 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.