State Agency Planning to Use AI to Answer Tax Questions

A state tax agency wants to use generative AI to give business owners tax advice. The state of California calls it an opportunity. Risk assessments are forthcoming.

A constant hum fills the air at California’s tax agency this time of year — phones ring and keyboards clack as hundreds of thousands of Californians and businesses seek tax guidance.

Call volume quadruples, to as many as 10,000 calls a day; average wait times soar from four minutes to 20.

“Right from the get-go when the bell rings at 7:30, you [already] have a wait,” said call center chief Thor Dunn, adding that workers with other jobs are trained to hop on the phones during peak periods. “All hands on deck.”

So later this year — for next tax season — California’s 3,696-person Department of Tax and Fee Administration plans to use generative artificial intelligence to advise its approximately 375 call center agents on state tax code. The AI will then inform what they pass on to California business owners asking for tax guidance.

Trained with massive data sets often scraped from the web without consent from authors, GenAI models can generate content like text, imagery and audio. A large language model such as OpenAI’s ChatGPT, which ignited interest in GenAI following its release in fall 2022, essentially predicts the next word in a sequence of input text, then outputs or generates text that reflects the training data.

What form would that take for you, the person calling in to the tax center? The tax department told CalMatters this technology will not be used without a call center employee there to review the answer, but a slide in its call for proposals seeking a vendor says any AI solution must “be able to provide responses to incoming voice calls, live chats, and other communications.”

That request for proposals, aimed at creating an AI to help the state with taxes, went out last month, and the process is scheduled to finish in April. A meeting with potential vendors attracted 100 attendees last month, department spokesperson Tamma Adamek told CalMatters.

The tax advice AI proposal is one of five proofs of concept California has launched to explore how state agencies can use GenAI, says California Department of Technology (CDT) Director and State Chief Information Officer Liana Bailey-Crimmins. Caltrans has two projects to explore whether GenAI can reduce traffic congestion and fatal accidents, and the California Health and Human Services Agency has two trials underway to explore whether GenAI can make it easier for people to understand and attain public benefits and assist in health-care facility inspections.

The winning AI tax proposal vendor will get a six-month contract, then state officials will determine whether a larger contract is warranted. The project must demonstrate shorter calls and wait times and fewer abandoned calls, and vendors must “monitor and report on GenAI solution responses for factual accuracy, coherence, and appropriateness.”

The project is the start of a yearslong, iterative AI regulation and adoption process that Gov. Gavin Newsom kicked off last fall. The executive order he issued requires state agencies to explore ways to use generative AI by July.

To limit risks, private companies awarded contracts will train AI models in a “sandbox” hosted on state servers, made to meet information security and monitoring standards set by CDT. Newsom’s executive order directs the technology department to release the sandbox in March for use by companies awarded contracts.

The state’s Government Operations Agency evaluated the benefits and risks of generative AI in November 2023. The report cautions that generative models can produce convincing but inaccurate results, deliver different answers to the same prompt, and suffer model collapse, in which predictions drift away from accurate results. GenAI also carries the risk of automation bias, in which people become overly trusting of and reliant on automated decision-making.

Exactly how the tax agency’s call center employees will know to trust answers generated by large language models is unclear.

Adamek, the tax department spokesperson, said call center employees are trained on basic tax and fee programs and, if they encounter an unfamiliar question, can ask more experienced team members for help. In July, CDT, in conjunction with other state agencies, is scheduled to help train state employees to recognize wrong or fake text.

The tax department does not consider its planned use of generative AI high risk, Adamek said, because it’s focused on enhancing state business operations and all the data involved is publicly available. She said the tax department will assess risk later in the process. Standards and guidelines for state agencies that sign contracts with private companies are due out in the coming weeks.

While the tax department doesn’t consider using GenAI high risk, it’s unclear whether CDT will agree.

Newsom’s order requires all state agencies to deliver to CDT, within 60 days, a list of any high-risk forms of GenAI they are using. Bailey-Crimmins told CalMatters that all agencies under the governor’s authority have said they are not using high-risk generative AI.

A new law also requires CDT to inventory all high-risk forms of AI or automated decision-making systems in use by state agencies by September.

Outside of government, some observers are wary of California’s AI plans. Among them is Justin Kloczko, the Los Angeles author of Hallucinating Risk, a Consumer Watchdog report about AI patents held by banks and used in financial services, and their potential for harm. He notes that documentation from San Francisco-based ChatGPT maker OpenAI warns that AI can pose a high risk when it delivers essential services or provides financial advice.

“There’s still a lot we don’t know about generative AI, and what we do know is that it makes mistakes and acts in ways that people who study it don’t even fully understand,” Kloczko said. He also questioned how easily call center employees, who may not be qualified to judge convincing-sounding text from a large language model, could determine whether its output is in fact inaccurate or false.

“I worry that workers in charge of this won’t understand the complexity of this AI,” he said. “They won’t know when they’re led astray.”

Bailey-Crimmins said that “we take those risks seriously” and that potential downsides will be taken into consideration when determining the next course of action after the six-month pilot project.

“We want to be excited about benefits, but we also need to make sure that what we’re doing is safeguarding,” she said. “The public puts a lot of trust in us, and we need to make sure that the decisions we’re making [are] not putting that trust in question.”

This article first appeared on CalMatters.org.
Khari Johnson is CalMatters' first tech reporter. He previously covered artificial intelligence use by businesses and governments as a senior writer for WIRED and VentureBeat with an emphasis on policy, power, human rights abuses, and telling the stories of people demanding accountability after being harmed by automation. He previously covered local news in San Diego. He lives in Oakland.