California Privacy Watchdog Floats AI Consumer Protections

The California Privacy Protection Agency (CPPA) has released draft rules that would govern how companies using automated decision-making tools — including those powered by artificial intelligence — can use consumers’ personal information.

The rules, which have yet to be adopted, would give California residents the right to grant or refuse access to their personal information for use in automated decision-making systems. Such systems construct personal profiles that companies can use to evaluate customer preferences and behavior, screen job applicants, or track worker performance, among other applications.

“Automated decision-making technologies and artificial intelligence have the potential to transform key aspects of our lives,” said Ashkan Soltani, CPPA executive director, in a statement. “We’re proud that California is meeting the moment by giving consumers more control over these technologies.”

Soltani said the agency’s board and the public will have opportunities to provide input on the proposed rules starting next month. The guidelines are meant to clarify how the California Consumer Privacy Act, which voters expanded in 2020 and which addresses a range of electronic and online uses of personal information, should apply to decision-making technology.

The proposal also outlines options for protecting consumers’ personal information when it is used to train AI models, which ingest massive data sets to predict likely outcomes or respond to prompts with text, photos and video.

OpenAI and Google have already been sued over their use of personal information found on the internet to train their AI products.

The proposed rules would require companies to inform people ahead of time how they use automated decision-making tools and let consumers opt in or out of having their personal information used by such tools.

Automated technology — with or without the explicit use of AI — is already used in situations such as deciding whether somebody is extended a line of credit or approved for an apartment. Some early examples of the technology have been shown to unfairly factor race or socioeconomic status into decision-making — a problem sometimes known as “algorithmic bias” that regulators have so far struggled to rein in.
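To make “algorithmic bias” concrete, the sketch below shows one common heuristic for spotting it: the “four-fifths rule” used in disparate-impact analysis, which flags a group whose approval rate falls below 80% of the best-treated group’s. The groups, field names and data are hypothetical illustrations, not anything prescribed by the CPPA’s draft rules.

```python
# Illustrative sketch only: fields and data are hypothetical,
# not specified by the CPPA's draft rules.
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate for each demographic group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, ratio=0.8):
    """Flag groups whose approval rate is below the 'four-fifths rule'
    threshold: less than 80% of the best-treated group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < ratio * best}

# Synthetic credit decisions: group B is approved far less often than group A.
sample = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
print(flag_disparate_impact(sample))  # -> {'B': 0.3333...}
```

A check like this catches only statistical disparities in outcomes; it says nothing about why they arise, which is part of why regulators have found such bias hard to rein in.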

The actual rulemaking process could take until the end of next year, said Dominique Shelton Leipzig, an attorney and privacy law expert at the law firm Mayer Brown. She noted that in previous rounds of rulemaking by the state’s privacy body, little has changed from inception to implementation.

The proposed rules do mark one significant departure from existing state privacy rules, she said: requiring companies to provide notice to consumers about when and why they are using automated decision-making tools is “pushing in the direction of companies being transparent and thoughtful about why they are using AI, and what the benefits are ... of taking that approach.”

The rules are not the state’s first run at creating privacy protections for automated decision-making tools.

One bill that did not make it through the state Legislature this year, authored by Assemblymember Rebecca Bauer-Kahan, D-Orinda, sought to guard against algorithmic bias in automated systems. It was ultimately held up in committee but could be reintroduced in 2024.

State Sen. Scott Wiener, D-San Francisco, has also introduced a bill that will be fleshed out next year to regulate the use of AI more broadly. That effort envisions testing AI models for safety and putting more responsibility on developers to ensure that their technology isn’t used for malicious purposes.

California Insurance Commissioner Ricardo Lara also issued guidelines last year on how artificial intelligence can and can’t be used to determine eligibility for insurance policies or the terms of coverage.

In an emailed statement, his office said it “recognizes algorithms and artificial intelligence are susceptible to the same biases and discrimination we have historically seen in insurance.”

The first hearing on the proposed rules was to be held Friday.

© 2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.