Gov. Gavin Newsom this week signed a small fleet of bills into law setting new limits on the use of artificial intelligence in the political arena.
Newsom signed the legislation during a fireside discussion with Salesforce CEO Marc Benioff at the Dreamforce Conference on Tuesday in San Francisco. During that conversation Newsom said that of the 991 bills on his desk, around 38 had to do with the AI space.
Assembly bills 2655, 2839, and 2355 take three approaches to the use of AI in the political realm — the first targets deepfake content on social media platforms, the second establishes a more restrictive timeline for distribution of “deceptive” AI-generated content, and the third requires disclosures on this sort of material.
“There are a lot of deepfakes out there, there is not a lot of disclosure, there is not a lot of labeling,” he said.
“That is now injunctive relief if you do any of those deepfake election misrepresentations,” he added after signing the bills.
AB 2655, also known as the Defending Democracy from Deepfake Deception Act of 2024, piles new responsibility on social media platforms “to block the posting of materially deceptive content related to elections in California, during specified periods before and after an election.”
The legislation would also require large online platforms to create a process for reporting content, while also offering an avenue for legal action against platforms that are not in compliance.
“The bill would also authorize candidates for elected office, elected officials, elections officials, the attorney general, and a district attorney or city attorney to seek injunctive relief against a large online platform for noncompliance with the act, as specified, and would assign precedence to such actions when they are filed in court,” the bill says.
The legislation added more fuel to the fiery feud between Newsom and billionaire entrepreneur Elon Musk, who tweeted that the governor was essentially making “parody illegal” with the new restrictions.
In a similar spirit, AB 2839 expands existing law that prohibits “deceptive audio or visual media of a candidate within 60 days of an election,” doubling that time period to 120 days in advance of an election and 60 days after an election in specified cases.
“This bill would prohibit a person, committee, or other entity from knowingly distributing an advertisement or other election communication, as defined, that contains certain materially deceptive content, as defined, with malice, as defined, subject to specified exemptions. The bill would apply this prohibition within 120 days of an election in California and, in specified cases, 60 days after an election,” the bill analysis reads.
Like AB 2655, it would also come with legal teeth: “The bill would authorize a recipient of materially deceptive content distributed in violation of this section, candidate or committee participating in the election, or elections official, as defined, to file a civil action to enjoin the distribution of the media and to seek damages against the person, committee, or other entity that distributed it, except as specified. The bill would require a court to place such proceedings on the calendar in the order of their date of filing and give the proceedings precedence.”
And finally, AB 2355 sets new disclosure requirements for the use of AI in political advertisements. Committees that create, publish, and distribute political advertisements in the form of images, audio, video, and other media would be required to disclose that the “advertisement was generated or substantially altered using artificial intelligence,” the bill analysis reads.
The Fair Political Practices Commission would oversee adherence to these new rules.