The report, set in motion by Gov. Gavin Newsom’s Executive Order N-12-23 (EO), signed Sept. 6, called on the California Government Operations Agency (GovOps), the California Department of Technology, the Office of Data and Innovation, and the Governor’s Office of Business and Economic Development to work with other state entities to identify “the most significant, potentially beneficial use cases for deployment of GenAI tools by the state.” But the report also required them to “explain the potential risks to individuals, communities, and government and state government workers,” with a particular focus on high-risk use cases in which GenAI could make consequential decisions about access to essential goods and services. The areas of risk are:
- Validity and reliability: The report defines validation as the “confirmation through evidence that the requirements for a specific intended use or application have been fulfilled,” and reliability as the “ability of an item to perform as required, without failure, for a given time interval, under given conditions.” It notes that inaccurate or unreliable AI systems increase risk and reduce trustworthiness. Automation bias, the tendency to over-rely on automated GenAI recommendations when making decisions, is closely tied to validity: it raises concerns about outputs that “sound right” but aren’t factual, the report said. Ultimately, reliance on such information could erode trust in government and its services. (A minimal gate against this failure mode is sketched after this list.)
- Safety: AI systems should not, under defined conditions, lead to situations that endanger human life, health, property or the environment. GenAI tools can pose notable risks to public health and safety, whether through malicious intent or simple lack of quality control. Because they can generate content at scale, they have the potential to spread misinformation and disinformation; and in sensitive domains like health care and public safety, GenAI should be evaluated to determine whether it’s necessary and beneficial, and governed carefully to mitigate risk.
- Accountability and transparency: The complexity of the GenAI life cycle makes both difficult to ensure, as a single model may involve multiple organizations contributing data. State government should be cautious about over-automating decisions or entirely removing human oversight from GenAI chatbots and text generators. Over-trusting such tools can lead to inaccurate information being provided to constituents and to inaccurate public program determinations, with the possibility of undermining state progress on diversity, equity, inclusion and accessibility. (The first sketch after this list illustrates keeping a human in the loop.)
- Security and resiliency: GenAI systems can be vulnerable to unique attacks and manipulations, such as poisoning of AI training data sets, evasion attacks and inference attacks, the report said. Their capabilities raise concerns about enabling bad actors and undermining government security if they’re not properly governed, and new capabilities created by GenAI will bring new security risks. Strong new security controls, monitoring and validation techniques will be needed to safeguard against such attacks; a basic provenance check against training-data tampering is sketched after this list. Newsom’s EO mandates a classified joint risk analysis of potential threats to, and vulnerabilities of, California’s energy infrastructure, and requires that a strategy be developed to assess GenAI-enabled threats to other critical infrastructure.
- Explainability and interpretability: The difficulty of extracting human-interpretable explanations from GenAI matters for governments, which must offer residents sufficient information about decisions that affect them. GenAI models can be prompted to explain their reasoning, but such explanations are inconsistent: models have been found to misrepresent their stated reasoning, and techniques for extracting a model’s true logic remain unreliable.
- Privacy: GenAI models can leak personal data if their training data isn’t properly anonymized and secured, and models can re-identify people whose data had been de-identified. Third-party plug-ins and browser extensions could collect data on user interactions with a GenAI model. Residents exercising their right to have online personal data removed may find it difficult or infeasible to extract and destroy information already embedded in trained models. And if a bad actor illegally accessed a state database, GenAI could amplify the resulting leak of private data. (A minimal identifier-scrubbing pass is sketched after this list.)
- Fairness: GenAI models can perpetuate societal biases if the data used to train them is imbalanced. Governments must proactively assess for algorithmic discrimination, the report said, including gender, racial and other biases, particularly in high-impact areas like criminal justice, health care, mental health, social services and employment; a simple disparate-impact screen is sketched after this list. If GenAI authorship isn’t made clear, algorithmic bias in state systems could be misattributed to the government rather than to the GenAI tool. And the sheer size of GenAI training data sets makes embedded bias harder to find and resolve.
- Workforce and labor impacts: GenAI adoption will introduce changes that support workers but will also alter aspects of their workflows. Staff may need upskilling to use GenAI effectively; the technology could also enable new ways of exploiting labor and encourage unsafe working conditions. And GenAI could be used by larger companies to further concentrate their power and tamp down competition.
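
To make the automation-bias and human-oversight concerns concrete, below is a minimal sketch, in Python, of the kind of validity gate referenced above: a GenAI-drafted reply is released only if every figure it cites appears in an approved source text, and is otherwise routed to a human reviewer. The source string, function name and regex are illustrative assumptions, not anything prescribed by the report.

```python
import re

# Illustrative only: numbers are among the easiest facts for a model to
# hallucinate, so any figure in the draft that is absent from the approved
# source text routes the draft to a human reviewer instead of a constituent.
APPROVED_SOURCE = "The standard renewal fee is $45 and processing takes 10 business days."

NUMBER = re.compile(r"\$?\d+(?:,\d{3})*(?:\.\d+)?")

def needs_human_review(draft: str, source: str = APPROVED_SOURCE) -> bool:
    """Return True if the draft cites any figure not found in the source."""
    return not set(NUMBER.findall(draft)) <= set(NUMBER.findall(source))

print(needs_human_review("The renewal fee is $45."))                 # False: grounded
print(needs_human_review("The renewal fee is $60, due in 5 days."))  # True: flag for review
```

A real deployment would check far more than numbers, but the design point stands: the model’s output is treated as a draft, and a human remains accountable for what goes out.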
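For the data-poisoning risk flagged under security, one basic control is data provenance: record a cryptographic hash of each training file at collection time and verify it before ingestion, so silent tampering is detected. This sketch assumes a hypothetical manifest; the truncated digest is a placeholder, not a real hash.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest recorded when the data was collected; the digest
# shown is a truncated placeholder for illustration.
MANIFEST = {
    "data/intake_forms.jsonl": "9f2c...",
}

def sha256_of(path: str) -> str:
    """Hash the file's bytes so any later modification is detectable."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

for path, expected in MANIFEST.items():
    if sha256_of(path) != expected:
        raise RuntimeError(f"possible tampering: {path} failed its hash check")
```

Hash checks only catch tampering after collection; data poisoned at the source needs separate vetting.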
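On the privacy risks, a first line of defense is scrubbing obvious identifiers before text is logged or reused as training data. The pass below is deliberately minimal and assumes U.S.-style formats; real de-identification takes far more than a few regular expressions.

```python
import re

# Minimal, illustrative redaction of obvious identifiers. Patterns assume
# U.S.-style formats and will miss many real-world variants.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach Jane at 916-555-0100 or jane@example.gov, SSN 123-45-6789."))
# -> "Reach Jane at [PHONE] or [EMAIL], SSN [SSN]."
```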
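Finally, on fairness, one simple screen for disparate impact in a GenAI-assisted process is to compare per-group outcome rates, as in this sketch using the common four-fifths heuristic. The records and the 0.8 threshold are invented for illustration; the report does not prescribe a specific test.

```python
from collections import defaultdict

# Invented example records: (group, approved). In practice these would be
# outcomes of a GenAI-assisted determination, tracked per protected group.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in records:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:  # four-fifths rule of thumb for disparate impact
        print(f"flag {group}: approval rate {rate:.2f} vs best {best:.2f}")
```

The output here flags group_b (0.33 vs. 0.67); a flag is a prompt for human investigation, not proof of discrimination.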