The Importance of Diversity in AI Isn’t Opinion, It’s Math

We all want to see our ideal human values reflected in our technologies. We expect technologies such as artificial intelligence (AI) to not lie to us, to not discriminate, and to be safe for us and our children to use. Yet many AI creators are currently facing backlash for the biases, inaccuracies and problematic data practices being exposed in their models. These issues demand more than a technical, algorithmic or AI-based fix; they call for a holistic, socio-technical approach.

The math demonstrates a powerful truth
All predictive models, including AI, are more accurate when they incorporate diverse human intelligence and experience. This is not an opinion; it follows from the mathematics of collective prediction. Simply put, when the diversity in a group is large, the error of the crowd is small, supporting the concept of “the wisdom of the crowd.” In an influential study, Hong and Page (2004) showed that diverse groups of low-ability problem solvers can outperform groups of high-ability problem solvers.
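This can be made precise with Scott Page’s diversity prediction theorem: a crowd’s squared error equals the average individual squared error minus the diversity of the predictions, so with individual ability held fixed, a more diverse crowd is a more accurate one. A minimal Python sketch that verifies the identity numerically (the numbers are illustrative, not from the article):

```python
import numpy as np

# Diversity prediction theorem (Page): for individual predictions s_i
# of a true value theta, with crowd prediction c = mean(s):
#
#   (c - theta)^2 = mean((s - theta)^2) - mean((s - c)^2)
#   collective error = average individual error - prediction diversity

rng = np.random.default_rng(0)
theta = 42.0                                       # true value being predicted
predictions = theta + rng.normal(0, 10, size=100)  # a crowd of noisy guesses

crowd = predictions.mean()
collective_error = (crowd - theta) ** 2
avg_individual_error = np.mean((predictions - theta) ** 2)
diversity = np.mean((predictions - crowd) ** 2)

# The identity holds exactly: with average individual error fixed,
# more prediction diversity means less collective error.
assert np.isclose(collective_error, avg_individual_error - diversity)
print(f"crowd error {collective_error:.2f} = "
      f"{avg_individual_error:.2f} - {diversity:.2f}")
```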

Model (in)accuracy
For generative AI to better reflect the diverse communities it serves, data from a far wider variety of human beings must be represented in models.
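One way to make “represented” concrete is to audit group shares in the training data against the population a model is meant to serve. A minimal sketch, where the dataset, column name and benchmark shares are all hypothetical stand-ins:

```python
import pandas as pd

# Hypothetical training set skewed toward one dialect group.
train = pd.DataFrame({"dialect": ["A"] * 800 + ["B"] * 150 + ["C"] * 50})
benchmark = {"A": 0.60, "B": 0.25, "C": 0.15}  # assumed population shares

observed = train["dialect"].value_counts(normalize=True)
for group, expected in benchmark.items():
    share = observed.get(group, 0.0)
    print(f"{group}: dataset {share:.0%}, population {expected:.0%}, "
          f"gap {share - expected:+.0%}")
```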

Evaluating model accuracy goes hand in hand with evaluating bias. We must ask: what is the intent of the model, and for whom is it optimized?
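For instance, a review might report accuracy per group alongside a disparate impact check, rather than a single aggregate score. A minimal sketch using randomly generated stand-in data; the group labels and the four-fifths threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)     # stand-in ground truth
y_pred = rng.integers(0, 2, size=1000)     # stand-in model decisions
group = rng.choice(["A", "B"], size=1000)  # a protected attribute

for g in ("A", "B"):
    mask = group == g
    acc = np.mean(y_true[mask] == y_pred[mask])
    rate = np.mean(y_pred[mask])           # selection rate for group g
    print(f"group {g}: accuracy {acc:.2f}, selection rate {rate:.2f}")

# Disparate impact: ratio of the lower selection rate to the higher.
# A common heuristic (the "four-fifths rule") flags ratios below 0.8.
rate_a = np.mean(y_pred[group == "A"])
rate_b = np.mean(y_pred[group == "B"])
print(f"disparate impact ratio: {min(rate_a, rate_b) / max(rate_a, rate_b):.2f}")
```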

A very human challenge: Assessing risk before model procurement or development
Emerging AI regulations and action plans increasingly underscore the importance of algorithmic impact assessment forms. The goal of these forms is to capture critical information about AI models so that governance teams can assess and address their risks before deploying them. Typical questions include the following (a sketch of one way to capture the answers as structured data follows the list):

  • What is your model’s use case?
  • What are the risks for disparate impact?
  • How are you assessing fairness?
  • How are you making your model explainable?
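
A hypothetical sketch of how such a form might be captured as structured data so that governance tooling can at least flag obvious gaps mechanically; the field names and example values are illustrative assumptions, not a real regulatory schema:

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    use_case: str                      # what the model is for
    optimized_for: str                 # whom the model is optimized for
    disparate_impact_risks: list[str]  # known risks of unequal harm
    fairness_metrics: list[str]        # how fairness is assessed
    explainability_approach: str       # how decisions are explained
    reviewers: list[str] = field(default_factory=list)

form = ImpactAssessment(
    use_case="resume screening for hiring",
    optimized_for="recruiter throughput",
    disparate_impact_risks=["penalizes non-traditional career paths"],
    fairness_metrics=["selection-rate parity across demographic groups"],
    explainability_approach="per-candidate feature attributions",
)

# Tooling can catch missing answers, but judging whether the answers are
# thoughtful still requires people with varied lived experiences.
if not form.reviewers:
    print("Flag for review: no reviewers recorded for", form.use_case)
```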

Though these forms are designed with good intentions, most AI model owners do not understand how to evaluate the risks of their use case. A common refrain might be, “How could my model be unfair if it is not gathering personally identifiable information (PII)?” Consequently, the forms are rarely completed with the thoughtfulness necessary for governance systems to accurately flag risk factors.

A model owner, an individual, cannot simply be given a list of checkboxes to evaluate whether their use case will cause harm. What is required instead is groups of people with widely varying lived experiences coming together in communities that offer the psychological safety to have difficult conversations about disparate impact.

Welcoming broader perspectives for trustworthy AI
IBM® believes in taking a “client zero” approach, implementing for itself the recommendations and systems it would offer its own clients across consulting and product-led solutions. This approach extends to ethical practices, which is why IBM created a Trustworthy AI Center of Excellence (COE).

“Interested in AI? Interested in AI ethics? You have a seat at this table.” The COE offers training in AI ethics to practitioners at every level. Both synchronous (teacher and students in a classroom setting) and asynchronous (self-guided) programs are offered.

Read more about how the Trustworthy AI Center of Excellence (COE) can help your agency or department with this all-important topic in the full blog written by Phaedra Boinodiris, Global Leader for Trustworthy AI, IBM Consulting.

For any questions or requests, contact your IBM representative today.

Restlessly reinventing since 1911, International Business Machines Corporation (IBM) has decades of experience strategically partnering with leading government organizations around the globe to help them build innovative technology solutions that accelerate the digital transformation of government.