
Computing Expert Explores the Pursuit of Responsible AI

The Legislature’s Select Committee on AI and Emerging Technologies convened for more than five hours Monday; a UT computer science professor and researcher opened the day with a focus on AI for good and also gave a primer on the technology’s history and vocabulary.

Good Systems is a University of Texas at Austin initiative bringing together more than 120 researchers from 30 departments in pursuit of responsible AI applications.

Professor Matthew Lease, a founding member of the cross-campus team, opened Monday’s testimony for the Texas House of Representatives Select Committee on Artificial Intelligence and Emerging Technologies.

The committee, created in recent weeks, is tasked with examining how AI and emerging technologies affect ethics, society and the economy. The seven-member panel heard more than five hours of expert testimony on a range of issues.

Lease said a diverse group of stakeholders — both technical and non-technical — is needed to guide generative AI adoption for good.

He explained that the Good Systems initiative seeks “the development of AI technologies that maximize societal benefits and minimize the risk of harm. … One of the things that separates us — and we believe really strongly — is that responsible AI really requires diverse perspectives and engagement. We can’t leave it to the technologists alone. And I can say this as a computer scientist.”

“You have to define what is responsible AI design and how to get there,” Lease said. “We really need to hear from a broad swath of people as we think about this question as a society.”

Drawing on examples from the past 15 years, he outlined what AI is already capable of doing, up through the current generation of tools that produce spoken, visual and textual material.

As it stands, AI is already recommending, as on shopping websites, or deciding, as in spam filters; both tasks work by examining inputs and recognizing patterns.
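The "deciding by recognizing patterns" idea can be sketched in a few lines of Python. This is a hypothetical toy, not code from the hearing or from UT; the keyword list and threshold are illustrative assumptions:

```python
# Toy spam filter: "deciding" by recognizing patterns in the input text.
# The pattern set and threshold below are illustrative assumptions only.
SPAM_PATTERNS = {"free", "winner", "urgent", "prize"}

def is_spam(message: str, threshold: int = 2) -> bool:
    """Flag a message when it matches enough known spam patterns."""
    words = set(message.lower().split())
    return len(words & SPAM_PATTERNS) >= threshold

print(is_spam("URGENT: claim your FREE prize now"))  # True
print(is_spam("Lunch at noon?"))                     # False
```

Real spam filters learn their patterns from data rather than using a hand-written list, but the basic shape, examine the input, match it against learned patterns, output a decision, is the same.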

Moving into machine learning and generative AI, he discussed data scale, data quality and data curation; random errors versus consistent bias; and supervised and unsupervised learning, i.e., whether or not training relies on human-labeled data.
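The supervised/unsupervised distinction Lease described can be illustrated with a minimal sketch on made-up one-dimensional data. The data, labels and thresholds here are illustrative assumptions, not anything presented at the hearing:

```python
# Supervised vs. unsupervised learning on toy 1-D data.
# All values, labels and thresholds are illustrative assumptions.

# Supervised: human-provided labels guide the model.
# Here the "model" is just the average value per label.
labeled = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

def train_centroids(examples):
    """Average the training values for each human-assigned label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Classify a new value by its nearest labeled average."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

centroids = train_centroids(labeled)
print(predict(centroids, 7.5))  # high

# Unsupervised: no labels at all; points are simply grouped by proximity.
def cluster(values, gap=3.0):
    """Split sorted values into groups wherever a large gap appears."""
    ordered = sorted(values)
    groups, current = [], [ordered[0]]
    for v in ordered[1:]:
        if v - current[-1] > gap:
            groups.append(current)
            current = []
        current.append(v)
    groups.append(current)
    return groups

print(cluster([1.0, 2.0, 8.0, 9.0]))  # [[1.0, 2.0], [8.0, 9.0]]
```

In the supervised half a human has already decided what "low" and "high" mean; in the unsupervised half the structure emerges from the data alone, which is why questions about who curates and labels the data matter.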

Questions raised during testimony included who manages the data and the learning, and who is responsible when AI goes wrong.

All of these factor into how AI can be used ethically and legally in the public sector, the crux of what the select committee will explore in the coming weeks. The committee is due to deliver its report in May, as assigned.

Rae D. DeShong is a Dallas-based staff writer and has written for The Dallas Morning News and worked as a community college administrator.