
San Jose Releases Generative AI Guidelines, Looks to Learn

CIO Khaled Tawfik says the city is eyeing the possibility of one day using a generative AI tool specially tailored for city governments, but it wants to learn more before finalizing policies.

San Jose has joined a growing number of cities addressing generative AI tools, releasing a set of new guidelines for using the technology that city officials consider a living document.

Khaled Tawfik.
Chief Information Officer Khaled Tawfik said he wants the city to use the technology, but in a responsible, controlled and transparent manner that accounts for known risks such as bias, cybersecurity and privacy. As such, San Jose’s guidelines offer warnings and best practices, going so far as to prohibit certain uses, including evaluating job candidates or allocating resources. The guidelines apply to city staff, contractors, volunteers and anyone else working on behalf of San Jose.

Tawfik sees the guidelines as an opportunity to get ahead of issues and help the city learn from how employees use the tools. Governments missed that opportunity with social media, so Tawfik is stepping in early to advise on responsible use rather than having to correct course later.

“This is our attempt to learn from the past and be more ready about the future,” he said.

Releasing this guidance also makes it clear that San Jose is interested in this technology and looking to share ideas.

“The guidelines will help us start the conversation,” Tawfik said. As San Jose works to develop official policies, “the guidelines will help us at least to understand what we know and identify what we need to know.”

City employees are already using generative AI for drafting policies, memos and job postings, and so the IT Department wants employees to keep it informed, doing so in part by filling out a form recording each use. This will help the department learn from experiences and ultimately provide advice.

The IT Department also has task forces focused on areas of application. The groups discuss best practices, concerns and what to avoid. IT co-guides these conversations alongside relevant experts — for example, HR helps with the task force on generative AI for job postings.

Generative AI may offer efficiency and could be helpful for drafting materials or summarizing long documents, Tawfik said. But some shortcomings have become clear. For example, the tools tend to be inaccurate with math and numbers.

Privacy is also a concern, and IT advises generative AI users to assume any information entered will be exposed to the public. Materials unready for publication shouldn’t be entered, nor should private emails. Employees looking for help drafting emails should avoid copy-pasting messages into generative AI, instead prompting the tools to write a generic message they can fact-check or augment with personalized details. The guidelines advise users to fact-check with multiple credible sources, including peer-reviewed journals and official documents.

But while employees can confirm facts and figures, it’s harder to thoroughly check generated content for another problem: bias. Many generative AI tools are trained on public online info, which tends to have inherent biases, Tawfik said.

Right now, when the city purchases an AI system, it asks vendors questions intended to catch potential bias as well as what the vendor does to protect against it. But the city needs to develop more tools and protocols for checking bias and equity on its own as it digs further into generative AI, Tawfik said.

Ultimately, Tawfik hopes to see the city adopt a private generative AI tool trained for government use or, better yet, specifically for city government.

One consideration is that government writing requires a particular style. Senate bills, for example, follow a set structure and formality. The city also uses gender-neutral language and the term “resident” rather than “citizen.” An AI tool trained for such needs would mean employees spend less time rephrasing and correcting generated content.

That’s still future-looking, though. Tawfik isn’t aware of an existing model, but San Jose has talked with several vendors about the possibility of AI trained on data from government, potentially restricted to San Jose data only.

“If we can train the model based on government data, I think [it] will be more consistent and reduce risk, potentially,” Tawfik said. “And the more agencies participate in this conversation and share the data for the training, we’ll see even higher results than we’re seeing today. But again, we’re just scratching the surface of this conversation and this technology.”

In the meantime, Tawfik said, the city wants to learn more before it replaces guidelines with policies. Ideally, policies would come by the end of the year, but that timeline could shift if the city feels it still has unanswered questions or concerns. Tawfik said the technology also needs to stabilize so the city can feel confident its policies will remain relevant.

“We want to have a policy when we feel it is mature enough and ready to be shared,” Tawfik said. “Otherwise, it’s going to be restrictive, it might be prohibitive, it might not be as effective.”

This article first appeared in Government Technology, sister publication to Industry Insider — California.
Jule Pattison-Gordon is a staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.