
Austin Responds to Inflammatory AI-Generated Public Comment

The comment, phoned in during a November City Council meeting, has prompted discussion of how to keep meetings open, transparent and accessible for city constituents while preventing future incidents.

Austin is taking steps to address the submission of an artificial intelligence-generated public comment during a City Council meeting that included racist content.

Since the COVID-19 pandemic began, cities have used technology to make council meetings more accessible remotely, a shift that some localities have made permanent. However, generative AI (GenAI) now makes it possible to submit inauthentic audio in an attempt to shape public opinion.

This was a challenge Austin officials faced during a Nov. 7 City Council meeting, when an AI-generated comment was submitted during a public comment period. In a public post to the City Council message board, Mayor Kirk Watson addressed the AI-generated comment days after it occurred, emphasizing the city is actively working to prevent additional instances like this.

“What happened was unacceptable, and it also showed a huge flaw in the system that someone could use AI to manipulate — and kind of derail — what public comments should be about,” said Councilmember Zohaib Qadri.

Austin offers 10 public comment slots of up to three minutes each at every City Council meeting, when members of the public can speak about issues that matter to them. Qadri underlined the importance of keeping this channel of communication open while reserving those slots for actual constituents.

The AI-generated comment identified itself as such at the end, he said, but as AI grows more sophisticated, such tools may become harder to detect. To address these risks, Qadri said the Office of the City Clerk and the city manager are working together on solutions and best practices to prevent similar incidents.

“We continually explore options that would further protect the integrity of the registration process without impacting community access,” a city spokesperson said in an email.

According to the spokesperson, the city does not share its technology security measures publicly. However, they noted these best practices will not include editing or pre-screening of a speaker’s comments.

Qadri said he hopes the city is able to determine who perpetrated this call and prevent the submission of additional AI-generated comments.

“Governments, by their very foundation, require transparency and open communication and a level of trust in order to best serve our constituents,” Qadri said, arguing GenAI can “destabilize an already fractured media environment” through its ability to rapidly spread disinformation.

The remark’s racist content is not unique to AI-generated comments, Qadri explained, as occasionally people make comments with hateful language toward certain ethnic or religious groups, or the LGBTQ+ community.

“I think we as a council should always look to push back,” he said. “We need to do whatever we can, through rhetoric or policy, to push back against hate and to make sure that people really feel that Austin is a safe community.”

In February, the City Council approved a resolution to create rules and best practices addressing AI technologies; in response, the city issued a memorandum highlighting its standards for GenAI. The memorandum underlined that city staff are continually evaluating AI Advisory Committee recommendations as they develop implementation strategies.

This story first appeared in Government Technology, Industry Insider — Texas' sister publication.
Julia Edinger is a staff writer for Government Technology. She has a bachelor's degree in English from the University of Toledo and has since worked in publishing and media. She's currently located in Southern California.