AI in Government: AI Law, Use Cases and Challenges

Embracing AI in government demands an AI policy that encompasses AI law, addresses AI governance and strategically invests in the evolving AI landscape.

It’s no surprise government agencies want to leverage AI. After all, it’s the hottest new technology, with a wide range of use cases.

But common government challenges, like budget concerns, safety risks, and compliance questions, also apply to AI adoption. If you’re looking to adopt AI in government, here’s what you need to know to make the most of it.

Want to learn more about the AI landscape and worker readiness? Download the AI skills report.

AI in government: Investments on the rise

In both private and public sectors, interest—and investment—in AI is only growing. The Pluralsight AI skills report found that 4 in 5 private sector organizations plan to increase AI spending in the next year.

Government spend is increasing, too. According to the Deltek Federal Artificial Intelligence Landscape report, federal AI spending increased 36% from 2020 to 2022, with the majority of these funds going toward AI research and development.

AI government use cases


Government agencies are diverting more of their limited budgets to AI because they recognize its potential to streamline and advance current processes and systems.

For government specifically, potential AI/ML use cases include:

  • Operations and management: AI can perform spend analysis, demand forecasting, and market intelligence to help teams plan, allocate resources, and determine budgets.
  • Task automation: AI automates tasks and reduces repetitive busywork, such as reviewing data, monitoring suppliers, and drafting grants. 
  • Cybersecurity: AI can automate incident response and threat detection, conduct risk assessments, boost vulnerability detection, and improve visibility.
  • Data analysis: AI enables faster insights and decision making by collecting and analyzing data.
  • Predictive analytics: AI can analyze large amounts of data to make predictions and take preventative actions. For example, it can identify real-time traffic patterns to reduce congestion at peak hours (see the sketch after this list).
  • Constituent support: AI-powered chatbots and voice bots help constituents find answers to frequently asked questions and get assistance faster. Call centers and 311 lines are common uses for generative AI.
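
To make the predictive analytics bullet concrete, here’s a minimal Python sketch that forecasts hourly traffic volume from historical averages and flags likely peak hours. All data, field names, and the congestion threshold are hypothetical, not drawn from any agency system.

```python
# Illustrative sketch only: a seasonal-average forecast of hourly traffic
# volume, flagging likely peak hours. All numbers here are hypothetical.
from collections import defaultdict
from statistics import mean

# Hypothetical historical observations: (hour_of_day, vehicles_per_hour)
history = [
    (7, 1850), (7, 1920), (7, 1790),
    (8, 2100), (8, 2240), (8, 2010),
    (12, 1400), (12, 1350),
    (17, 2300), (17, 2450), (17, 2380),
    (22, 600), (22, 540),
]

# Seasonal-naive forecast: predict each hour's volume as its historical mean.
by_hour = defaultdict(list)
for hour, volume in history:
    by_hour[hour].append(volume)
forecast = {hour: mean(volumes) for hour, volumes in by_hour.items()}

# Flag hours whose forecast exceeds a congestion threshold so planners can
# act preemptively (e.g., retime signals or add transit capacity).
CONGESTION_THRESHOLD = 2000  # vehicles/hour; hypothetical
for hour in sorted(forecast):
    flag = "PEAK -- consider mitigation" if forecast[hour] > CONGESTION_THRESHOLD else "ok"
    print(f"{hour:02d}:00  forecast={forecast[hour]:,.0f}  {flag}")
```

A production system would use richer features and a proper time-series model, but the planning logic is the same: forecast, compare to capacity, and act before the peak arrives.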

Agencies can also use AI/ML for things like transportation safety, medical support, space operations, and first responder awareness.

Challenges of AI in government


There’s no shortage of AI use cases for the government, but new and existing challenges can make AI adoption a daunting prospect.

Legacy infrastructure and systems impede modernization


Government agencies often use legacy systems that aren’t designed to work with AI/ML implementations.

To overcome this challenge and use AI effectively, organizations will need to modernize their data, network, cloud, and cybersecurity capabilities. This includes modernizations and improvements across:

  • Data management, data cleansing, and data tagging (a cleansing-and-tagging sketch follows this list)
  • Data security and Zero Trust Architecture
  • Cloud infrastructure, engineering, and cloud services
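
As a rough illustration of the data management work, here’s a minimal Python sketch that cleanses and tags records before they feed an AI/ML pipeline. The field names, validation rules, and tags are hypothetical.

```python
# Illustrative sketch only: basic data cleansing and tagging before records
# feed an AI/ML pipeline. Field names and tag rules are hypothetical.
import re

raw_records = [
    {"id": "001", "agency": " Dept. of Transportation ", "email": "OPS@EXAMPLE.GOV"},
    {"id": "001", "agency": " Dept. of Transportation ", "email": "OPS@EXAMPLE.GOV"},  # duplicate
    {"id": "002", "agency": "Parks & Recreation", "email": "not-an-email"},
]

def cleanse(record):
    """Trim whitespace, normalize case, and null out invalid emails."""
    email = record["email"].strip().lower()
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        email = None
    return {"id": record["id"], "agency": record["agency"].strip(), "email": email}

# Deduplicate on record ID, keeping the first occurrence.
seen, cleaned = set(), []
for rec in raw_records:
    if rec["id"] in seen:
        continue
    seen.add(rec["id"])
    c = cleanse(rec)
    # Tagging: attach metadata that downstream models and access policies can use.
    c["tags"] = ["contains_contact_info"] if c["email"] else ["needs_review"]
    cleaned.append(c)

print(cleaned)
```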

AI governance and compliance are still developing


AI technology advances every day, making AI law and governance a moving target. In general, though, the White House’s Blueprint for an AI Bill of Rights outlines five key principles to follow when building, using, or deploying AI systems.

  1. Safe and effective systems: Systems need pre-deployment testing and ongoing monitoring to ensure they’re safe, effective, and proactively protect users. 
  2. Algorithmic discrimination protections: Designers, developers, and deployers must design and use algorithms and systems in an equitable way to prevent discrimination (a simple fairness check appears after this list).
  3. Data privacy: Designers, developers, and deployers must create built-in data privacy protections and give users agency over how their data is collected and used.
  4. Notice and explanation: Automated systems should provide clear explanations about how they’re used and how they determine outcomes that affect the user.
  5. Human alternatives, consideration, and fallback: Users should be able to opt out of automated systems and work with a human instead, especially if the system fails, creates an error, or the user wants to contest the output.
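
The Blueprint doesn’t prescribe specific tests, but one common heuristic for the algorithmic discrimination principle is the “four-fifths rule” borrowed from US employment law. Here’s a minimal Python sketch; the outcome counts are hypothetical, and a low ratio is a signal for deeper review, not proof of discrimination.

```python
# Illustrative sketch only: a disparate impact check using the "four-fifths
# rule" heuristic. The Blueprint for an AI Bill of Rights does not mandate
# this specific test; the outcome counts below are hypothetical.

# Favorable outcomes produced by an automated system, per group.
outcomes = {
    "group_a": {"favorable": 180, "total": 400},
    "group_b": {"favorable": 120, "total": 400},
}

rates = {g: v["favorable"] / v["total"] for g, v in outcomes.items()}
reference_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / reference_rate
    # Ratios below 0.8 are a conventional red flag warranting human review,
    # not proof of discrimination on their own.
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} -> {status}")
```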

The White House also released AI implementation guidance for federal agencies specifically. This includes three main pillars:

  1. Strengthening AI governance: Designate Chief AI Officers who are responsible for coordinating their agency’s AI use, advising leaders on AI, and managing AI risks.
  2. Advancing responsible AI innovation: Develop an AI strategy and remove barriers to responsible AI use and maturity, such as outdated cybersecurity approval processes.
  3. Managing risks from AI: Determine AI uses that impact rights and safety, follow AI risk management practices, and provide transparency.

AI security risks present new threats and vulnerabilities


Cybersecurity is already one of the biggest challenges for federal agencies, and AI adds another layer of complexity. AI governance frameworks can help agencies mitigate these emerging risks.

The NIST AI Risk Management Framework offers advice on the design, development, use, and evaluation of AI tools and systems. The OWASP AI Security and Privacy Guide provides guidance on dealing with AI privacy and security.

Learn more about the state of federal cybersecurity and the impact of AI on cybersecurity.

Bias and misinformation impact accuracy


If an AI model pulls from data sources with biased or inaccurate information, its output will also be biased or inaccurate. Because of this, data accuracy can be an issue for government agencies.

To mitigate this risk, agencies can use sources of information they can control, like their own websites, to train their generative AI models. They can then limit searches to these controlled sources.
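
As a minimal sketch of that approach, the Python below filters retrieved documents to an allowlist of agency-owned domains before building a prompt for a generative AI assistant. The domains, documents, and prompt format are all hypothetical; a real system would plug this check into its retrieval layer.

```python
# Illustrative sketch only: restricting a generative AI assistant's context
# to agency-controlled sources. Domains, documents, and the prompt format
# are hypothetical.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"agency.gov", "services.agency.gov"}  # hypothetical allowlist

documents = [
    {"url": "https://agency.gov/permits/renewal", "text": "Permits renew every 2 years..."},
    {"url": "https://randomblog.example.com/permits", "text": "Unverified claims..."},
]

def is_controlled(url):
    """Keep only sources the agency owns and maintains."""
    return urlparse(url).hostname in ALLOWED_DOMAINS

context = [d for d in documents if is_controlled(d["url"])]

# Build the model prompt from vetted context only; answers should cite it.
prompt = (
    "Answer using ONLY the sources below. If they don't cover the question, say so.\n\n"
    + "\n".join(f"- {d['url']}: {d['text']}" for d in context)
)
print(prompt)
```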

Unfortunately, even controlled sources of information can be inaccurate. For example, a website may be outdated or missing certain information. Organizations that plan to power their AI tools with their website or similar sources need to ensure these sources are always up to date and accurate.

Workers lack AI skills


95% of executives and 94% of IT professionals believe AI initiatives will fail without staff who can effectively use AI tools. But only 40% of organizations have formal, structured training for AI skills.

To use AI tools successfully, organizations need to bring their workforce up to speed with AI training and skill development, data science knowledge, and relevant soft skills, such as critical thinking.

Don’t know where to start? Consider AI explained, prompt engineering, AI for cyber defense, and other Pluralsight AI courses.

Learn how to fill the tech talent gap in the public sector.

AI isn’t a magic solution for government challenges


For some organizations, AI can sound like an appealing alternative to human employees. But AI won’t solve all your problems—you still need human intelligence to review drafts, create policies, and make decisions that impact your mission.

Developing an AI policy and strategy for government


Agencies need an AI policy and AI skills strategy to advance mission-critical objectives with this new technology.

Start by determining how you’ll use AI. Then perform a risk assessment and create a plan to handle the challenges of AI and upskill your employees.

Ready to start your AI digital transformation? Learn more about Pluralsight’s AI solution.