Maintaining AI Innovation Will Require Cooperation, Purpose

Generative AI put government on its heels as officials tried to understand what it is and how to keep it from derailing the status quo. Now, a more proactive approach is taking hold, officials say.

State CIO Liana Bailey-Crimmins. (Photo: Eyragon Eidam)
The emergence of generative AI (GenAI) was nothing short of a sucker punch for state government. It brought unfamiliar risks, new capabilities and the sort of regulatory quagmire not seen since social media and blockchain came on the scene.

But now, officials at the state and federal level say they have a clearer picture of the overall landscape, and what’s more, the tools to move ahead in a meaningful way. At the Joint California Summit on Generative AI last week, experts from government and academia shared how they see innovation in the space continuing.

State CIO Liana Bailey-Crimmins acknowledged the risks that GenAI poses, urging a security mindset, but added that it could also be a key technology in closing the widening knowledge gap between the state’s IT workforce and its rapidly aging legacy systems.

Having GenAI comb through the code in legacy systems written in COBOL or Assembly to identify opportunities to streamline and modernize is one area where she sees significant opportunity. “Wouldn’t that be wonderful, to accelerate the ability to modernize?” she said.
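Neither Bailey-Crimmins nor the other panelists pointed to a specific tool, but a minimal sketch of what such a legacy-code review pipeline could look like appears below, written in Python. The chunk size, the prompt wording and the generate() callback standing in for whatever GenAI service an agency has approved are all illustrative assumptions, not anything described at the summit.

```python
from pathlib import Path

CHUNK_LINES = 200  # illustrative: keep each request small enough for a model's context window


def chunk_source(path: Path, chunk_lines: int = CHUNK_LINES):
    """Split a legacy source file (COBOL, Assembly, etc.) into reviewable chunks."""
    lines = path.read_text(errors="replace").splitlines()
    for start in range(0, len(lines), chunk_lines):
        yield start + 1, "\n".join(lines[start:start + chunk_lines])


def review_legacy_file(path: Path, generate):
    """Ask a GenAI model, via the caller-supplied `generate` function (a stand-in
    for whichever approved API an agency actually uses), to flag modernization
    candidates in each chunk of a legacy program."""
    findings = []
    for first_line, chunk in chunk_source(path):
        prompt = (
            "You are reviewing legacy government code. Summarize what this "
            f"fragment (starting at line {first_line}) does and flag logic that "
            "could be simplified or ported to a modern language:\n\n" + chunk
        )
        findings.append((first_line, generate(prompt)))
    return findings
```

The plumbing is the easy part; the security mindset Bailey-Crimmins urged applies to how agencies vet and act on the model’s answers.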

Stanford Institute for Human-Centered AI Senior Fellow Daniel Ho noted that AI could also prove useful for government workers trying to navigate the regulatory mazes that exist institutionally.

Bailey-Crimmins and U.S. General Services Administration (GSA) Administrator Robin Carnahan agreed that the best path forward during this “unusual moment” in technology is one based on purpose, rather than one based on buying a shiny new toy. The need for thoughtful and well-rounded regulation remains, despite some argument to the contrary from the academic side of the panel.

Carnahan also spoke in favor of more transparency where solutions are concerned, calling for an end to the software “black box.”

“What we need in government is to have more transparency so we can have better interoperability because we haven’t had that much in the past …,” Carnahan said.

A resounding theme across the panel discussion, moderated by Government Operations Agency Secretary Amy Tong, was the need to train staff and open the talent pipeline. The federal government has been aggressive in training for AI and GenAI, Carnahan said, and has also released a procurement guide.

Chris Manning, a senior fellow with the Stanford Institute for Human-Centered AI, said more focus should be put on K-12 math and science education to better prepare students for careers in IT and AI.

Manning also urged leaders and lawmakers to look beyond the big names in the industry and to smaller, more innovative startups. These startups need good open-source data and the right regulatory environment to continue to innovate, he said.

“It’s no different than a hammer. You can use a hammer in bad ways, but we don’t ban hammers, and in the same way we shouldn’t be banning open-source,” he said.

“If you’re disproportionately undermining the startup ecosystem, you’ll be killing the future prosperity of California,” he added.

Eyragon is the Managing Editor for Industry Insider — California. He previously served as the Daily News Editor for Government Technology. He lives in Sacramento, Calif.