Oversight Group Examines Progress, Next Steps on GenAI

A blog post on generative artificial intelligence from the Little Hoover Commission looks at Gov. Gavin Newsom’s recent executive order and reports progress in the field.

Much remains to be done if California is to take the lead in what it calls the “GenAI race,” but an independent state oversight agency finds considerable work underway and lauds Gov. Gavin Newsom for his recent executive order on the nascent technology.

In a recent blog post, the Little Hoover Commission (LHC), a state agency that investigates state government operations and policy and recommends to the governor and lawmakers ways to improve and economize state operations, assessed the landscape of generative artificial intelligence. Among the takeaways:

  • Public education is a vital component of the state’s work to “harness and mitigate the power of GenAI,” the Sept. 21 post by Communications and Outreach Analyst Allie Powell said. Basic understanding of the technology remains low despite the surge in popularity of tools like Bard and ChatGPT, Powell wrote, noting that press coverage has predominantly focused on the consequences of GenAI rather than on explaining what it is, how it can be used or how to balance its risks with its “potential for progress.” LHC Commissioner David Beier made that point in a recent episode of The Global Enterprise Thread podcast, Powell said.
    “Explainability coming from both the creators of AI and the users of AI can help people be less fearful of the technology and more accepting of how it can help them in their daily lives,” Beier said in the episode, which featured former U.S. Senate Majority Leader Tom Daschle. The episode looked at “How Regulatory AI Guardrails Will Shape Markets and Impact Business.”
  • Gov. Gavin Newsom’s recent executive order tasking state agencies with studying the development, use and risks of GenAI in state government addresses a “broad range of issues such as effects on the labor market, potential threats to state infrastructure, and the need for public education,” Powell wrote in “Generative Artificial Intelligence: California’s Next Steps,” noting it follows the Biden-Harris administration’s announcement of a forthcoming federal executive order on artificial intelligence. Newsom’s executive order, Powell said, addresses similar topics to those in the LHC’s 2018 report, Artificial Intelligence: A Roadmap for California. That report found that the state had made “some strides to use modern technology to improve operations and services” but was still “ill prepared for the inevitable changes AI will bring” and lacked the necessary infrastructure to plan strategically to capitalize on AI technology while minimizing risks. Since 2018, the post indicates, California has “launched multiple initiatives to better prepare and strategize for the GenAI world,” including the California Department of Technology’s Artificial Intelligence Community of Practice in 2020, aimed at increasing awareness of GenAI and related projects.
  • GenAI has also been on lawmakers’ minds. State Senate Bill 294, an intent bill from state Sen. Scott Wiener, D-San Francisco, expressed the Legislature’s intent to pass laws on AI concerning, “among other things, establishing standards and requirements for the safe development, secure deployment, and responsible scaling of frontier AI models in the California market.” This would be done in part, the bill said, by creating a framework of disclosure requirements for AI models. The proposed law was withdrawn but subsequently reintroduced Sept. 14 to the Senate Rules Committee. Menlo Park Democratic state Sen. Josh Becker’s SB 721 would have created a California Interagency AI Working Group to report to the Legislature on AI, recommend a definition relative to “its use in technology for use in legislation,” and help determine which agencies should develop and oversee AI policy and its implementation. The bill made it as far as the Assembly Committee on Privacy and Consumer Protection, where it has remained since June 1.
  • A key point of the blog post, LHC Executive Director Ethan Rarick said in an interview, was to highlight the 2018 report, which he said the Commission’s officials consider prescient, and a topic it could revisit. (The Commission offered further recommendations on AI in 2022.) “Obviously, 2018 to now on this topic, a lot has changed, but we still think there are a lot of relevant recommendations in that report that, in broad measure, make sense and are important,” Rarick said. No new studies are currently authorized, but, he added, “the Commission could decide to revisit that topic. It often does that.”
Theo Douglas is Assistant Managing Editor of Industry Insider — California.