There’s no “maybe” to it: Artificial intelligence is having a moment, and IT executives at many organizations are planning for, deploying and integrating the still-emergent tool into their existing technology stacks.
Public- and private-sector technology leaders considered the subject Sept. 19 at the inaugural California Government Innovation Summit*, during a discussion titled “AI in Action: Navigating Policy, Ethics and Guidance.”
Among those sharing ideas and discussing how organizations may work with AI were Addie Cooke, global AI policy lead at Google Cloud; Martin Oberhofer, director and distinguished engineer for data fabric at IBM; Ashkan Soltani, executive director of the California Privacy Protection Agency; and Blake Valenta, deputy director of data programs and policy at the California Office of Data and Innovation.
Among the takeaways:
1. Having an AI governance framework is the best place to start.
At Google, AI is used across products, so it's vital for company leaders to be aware of how their models are being used. At Google Cloud, Cooke said, leaders have created a data stewardship committee to uphold the company's data-sharing commitment to customers, which says the company will never use customer data in Google Cloud to train its models. Google also maintains its own equitable data sets, on which it does adversarial testing within its responsible innovation program. Customers at the public- and private-sector workshops she conducts are eager to understand the company's governance process, Cooke said, and being able to point to the National Institute of Standards and Technology's AI risk management framework and the International Organization for Standardization's forthcoming standard on AI management systems (ISO/IEC 42001) is helpful.
“And it’s really just having a framework for customers to understand, ‘Okay, this is what AI governance looks like. Here’s how data governance fits in,’” Cooke said. “I think the world is going to be imminently safer when every organization has their own AI and data governance process.”
2. Different generative AI approaches can bring new challenges around data.
A key difference between generative AI models and earlier approaches, Oberhofer said, is that these models come pre-trained; in the shift from random forest models to large language models (LLMs), unknowns can be introduced.
“Can you really trace all the unstructured data assets which went into that model?” Oberhofer asked the room, calling for a show of hands of those who had looked at unstructured data governance before.
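Oberhofer's question is, at bottom, a data lineage problem. As a hedged illustration of where a governance team might start — hashing and logging each unstructured asset as it enters a training corpus — the sketch below uses only Python's standard library; the schema, field names and file paths are hypothetical, not an IBM product or anything the panel demonstrated:

```python
# Illustrative sketch only: a minimal lineage record for unstructured training
# assets, so each document that fed a model can be traced later. The schema,
# field names and file paths are hypothetical assumptions for illustration.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AssetRecord:
    path: str         # where the unstructured asset lives
    sha256: str       # content hash, so later edits to the file are detectable
    ingested_at: str  # when the asset entered the training corpus
    model_run: str    # which training or tuning run consumed it

def register_asset(path: str, model_run: str) -> AssetRecord:
    """Hash an asset and stamp it with the run that consumed it."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return AssetRecord(
        path=path,
        sha256=digest,
        ingested_at=datetime.now(timezone.utc).isoformat(),
        model_run=model_run,
    )

# Appending one JSON line per asset yields a simple, auditable answer to
# "which documents went into this model?" (paths here are placeholders):
# with open("lineage.jsonl", "a") as log:
#     record = register_asset("policy_memo.pdf", "tuning-run-042")
#     log.write(json.dumps(asdict(record)) + "\n")
```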
One of the challenges with LLMs, he said, “is extending your data governance practices into the unstructured world,” as well as calibrating the LLM for a specific use case without changing the underlying model.
“This use case-specific training has to be taken into consideration if you want to judge the prediction quality of an LLM as well; that makes a huge difference compared to before,” he said. “Understanding the data elements which went into the LLM is a challenge. The process of tuning it is a new approach.”
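Calibrating a model for a specific use case without altering its underlying weights is commonly done with parameter-efficient methods such as LoRA. The sketch below, using the open source Hugging Face transformers and peft libraries, is one assumed way to do this, not the approach Oberhofer described; the base model and hyperparameters are placeholders:

```python
# Illustrative sketch only: parameter-efficient tuning (LoRA) adapts an LLM to
# a use case while the pre-trained weights stay frozen. The base model and
# hyperparameters below are placeholder assumptions, not the panel's setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

# LoRA injects small trainable adapter matrices alongside the attention
# projections; the original weights are untouched, so the underlying model
# is preserved and the use-case calibration stays separable and auditable.
config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor for adapter updates
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

Because only the small adapter matrices train, the pre-trained model's general behavior remains intact, which is one way the prediction quality of a use-case-specific tuning can be judged separately from the base model.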
3. Data science work in San Francisco was foundational for the state Office of Data and Innovation’s Data Science Accelerator.
That work dates to 2017 and 2018 and the Data Science SF initiative, Valenta told a full room of more than 100 listeners. He noted that he and others who worked on the project had hoped to find “a nice, off-the-shelf, everything-worked-out ethical toolkit to help us think through the process.”
No such toolkit proved to be available, but San Francisco's then-Chief Data Officer Joy Bonaguro worked with Johns Hopkins University to create what was needed. (Bonaguro went on to serve as California’s CDO from February 2020 to May 2023.)
“And so we brought that toolkit here, to the new Data Science Accelerator that we're starting, and that’s what we've been using to engage with, and it's something that we've found incredibly valuable,” Valenta said. “Because as you know, when you start on a project, things can go different ways, different directions, and you always want to check back in.”
4. Guidance on AI and automated decision-making is on the way.
The California Privacy Protection Agency, the nation’s first independent state agency focused on privacy, will be issuing regulations on automated decision-making (ADM), including consumers' ability to opt out of ADM and to receive “meaningful information” about ADM decisions, Soltani said.
“In the pre-rulemaking, we issued some draft regulations on risk assessments, related to essentially AI and ADM,” he said. “And we'll be issuing some guidance in the coming months, some draft regulation that I hope you all engage in.”
*The California Government Innovation Summit is hosted by Government Technology magazine, a publication of e.Republic, which also produces Industry Insider — California.