Oversight Group Offers Artificial Intelligence Recommendations

The guidance from the state Little Hoover Commission follows its 2018 report “Artificial Intelligence: A Roadmap for California” and, in part, looks at progress since then.

An independent oversight agency charged with investigating state government operations and recommending policy improvements has issued several recommendations on artificial intelligence.

In a recent “Lessons from Research” post, “How California Can Better Harness the Power of Artificial Intelligence,” the Little Hoover Commission (LHC) offers a follow-up to its 2018 report, Artificial Intelligence: A Roadmap for California. In that report, the Commission noted that the state had “fallen troublingly behind other states, cities, and countries in using AI to improve its economy, public health and safety, jobs, and the environment.”

The LHC finds that the state has “since made progress to better prepare for and strategize for an AI world,” highlighting the California Department of Technology’s (CDT) Fall 2020 launch of the Artificial Intelligence Community of Practice, which aims to drive awareness of AI and related projects and is open to “anyone interested in engaging and collaborating on how to responsibly adopt AI technologies.” But more must be done, according to the LHC, which points out that the 2018 report called on the governor and Legislature to “adopt an AI policymaking agenda” capable of providing leadership sufficient to grow the economy, enhance service to residents and promote privacy and transparency in the use of AI.

“... Fundamentally, most of our recommendations remain important issues, and we hope to see additional work on these issues from policymakers,” LHC Executive Director Ethan Rarick told Techwire via email.

“Under the California Department of Technology’s current statutory authority, new technologies such as artificial intelligence are being evaluated with the focus on improving the state’s digital service delivery to California residents,” Pam Haase, CDT’s deputy director of strategic initiatives, told Techwire via email. “CDT is currently in the policy planning process and working with several departments on their current or planned use of artificial intelligence to ensure the ethical and responsible use of this technology.”

In its Jan. 25 post, the LHC recommended:

  • The state appoint an AI special adviser in Gov. Gavin Newsom’s Cabinet to work with colleagues in AI across state and local entities — and to “create a strategic plan that incorporates AI within state government” and oversee its “safe and transparent deployment” by the state. Here, the Commission “didn’t want to be overly prescriptive” and left the strategic plan’s precise nature “flexible to allow for changing technologies, state needs, etc.,” Rarick said.
  • Each state agency create “strategic plans that include the use and implementation of AI technology to improve and enhance operations and services within an ethical framework that promotes the use of AI for economic, social, and environmental good.” In July 2018, the LHC surveyed 84 state agencies on the applications and implications of AI in their organizations, according to its report that year; 80 percent said their entities weren’t using AI, though 23 percent said they intended to use AI in the next decade. Additionally, the report said, several indicated their department was “using technology to automate tasks.”
  • The state create an AI commission made up of “knowledgeable professionals and experts in the field” from private industry, government, nonprofits, unions and academia to lead on the policy challenges and opportunities in the state’s implementation of AI. The 2018 report described an AI commission as having “three broad purposes,” Rarick said: developing “AI-related demonstration projects for critical state services,” incorporating successes into the state, and advancing the effective state use of data science.
  • The state improve data collection “related to jobs at risk from new technologies” by requiring the Employment Development Department to collect and evaluate data identifying how AI’s impacts will vary by “region, employment sector, socioeconomic group, and more.” This recommendation, Rarick said, focused on getting the data and analyzing it. “To be honest, the ultimate outcome of how that analysis is used might very likely depend on what is discovered,” he said, noting the 2018 report called for the data to “be analyzed to determine how the impact of AI will differ by region, county, socioeconomic group, etc.”

Theo Douglas is Assistant Managing Editor of Industry Insider — California.