IT Leaders Stress Ethical Implications of AI Implementation

Health and Human Services CIO Ricardo Blanco and IBM Global Consulting Leader for Trustworthy AI Phaedra Boinodiris met at a Government Technology event to discuss responsible AI policies.

[Image: A robotic hand points to the words "AI ethics," surrounded by white circles and symbols on a dark blue and black background. (Shutterstock)]
The featured panel at the Government Technology* Texas IT Leadership Forum last week brought together leaders from the public and private sectors to share their thoughts on how entities can implement AI while weighing ethical implications and forming responsible policies.

The event was moderated by Brian Cohen, vice president of the Center for Digital Government,* and featured Health and Human Services Commission (HHSC) CIO Ricardo Blanco and IBM Global Consulting Leader for Trustworthy AI Phaedra Boinodiris. They discussed trust, transparency, accountability and risk.

According to Boinodiris, having AI accurately reflect the widely diverse communities that it serves is a sociotechnical challenge that organizations looking to take advantage of the technology must address.

“Ultimately, what we’re trying to do is have our own human values be reflected into technology,” said Boinodiris. “We expect our technologies like AI to not lie to us. We expect it to not discriminate. We expect it to be safe for us and for our children to use and anytime you talk about something that’s still so technical, you’ve got to approach it holistically. You’ve got to be thinking about what is the right organizational culture that is required to create AI responsibly.”

Boinodiris stressed that considering the human experience in relation to AI is vital for organizations weighing whether they can trust the technology.

“The purpose of AI is not meant to displace human beings,” said Boinodiris. “It is not meant to have control over people. It is meant to augment human intelligence, but then it is incumbent upon us practitioners to be thinking about what is the experience like for a human being when they’ve been augmented by AI? Have they been empowered? How do you measure that?”

Blanco added that trust is especially important in the public sector, where AI — which Blanco said is being used “whether we like it or not” — has access to highly sensitive data.

“In our case, I would say we need to put ourselves into the customers’ shoes,” said Blanco. “If I’m going to allow you to see my confidential information, I have to trust you.”

To mitigate the risks of untrained agency employees using AI, Blanco urged entities to educate executives and staff on its responsible use and to put proper guardrails in place.

“A lot of it has to do with what governance, what policy, what structure, what safeguards you have in place, and that’s the way I would take it from our agency’s approach of what would be the expectation. If I was that customer, what would I expect for the individual on the other end to treat my information?”

Blanco advised state agencies to converse with one another about their AI policies and collaborate on how best to implement the technology in their organizations.

“That’s why we have partnerships,” Blanco explained. “We have the vendor community that we have conversations with. We also look at other state agencies: How are you doing? What policies do you have in place? Have you had the same challenge? How are you addressing that challenge? How can we work together to proliferate this training and how it’s used across the various agencies, because there’s a lot of state agencies within Texas that are already doing it. They already have policies in place, so again, making sure that we understand what’s available.”

Boinodiris named three key things organizations need to have when taking a holistic approach to organizational cultures: an open growth mindset, a diverse and inclusive workforce determining which data sets to use to train AI, and a multidisciplinary team developing AI and its governance.

“They have to understand things like design and sociology and anthropology and psychology and legal frameworks and more,” said Boinodiris. “When you talk about this subject of AI literacy, most people mistakenly think that 100 percent of the effort to develop AI is coding. It’s not true. Well over 70 percent of the effort is just figuring out what’s the right data to use to train these models. And they have to ask questions like, ‘Was this gathered with consent? Is this representative of all the different kinds of people that we need to serve? Is this even the right data to use for this particular problem?’”

*Government Technology and the Center for Digital Government are part of e.Republic, Industry Insider — Texas’ parent company.
Chandler Treon is an Austin-based staff writer. He has a bachelor’s degree in English, a master’s degree in literature and a master’s degree in technical communication, all from Texas State University.