That is why Ronen was planning to introduce legislation Tuesday to the Board of Supervisors requiring the city’s Department of Technology to keep a public list of where and how AI technology is used across the city and county, and the reasons for it.
“This is basically just a transparency bill,” Ronen told the Chronicle. “We’re not prohibiting any uses” of AI.
The legislation would also require an impact assessment of the city’s AI programs with the goal of determining their potential to displace workers, make biased decisions, create security risks and intrude on privacy, among other concerns. Ronen said those assessments would include the input of labor and privacy experts. The legislation would also direct the city’s Department of Technology to write standards for buying new AI programs.
San Francisco’s recently appointed chief information officer, Michael Makstman, told the Chronicle in a recent interview that having more than 50 city departments with their own IT systems and staff who do not report to him was among the biggest challenges he faces in his new role.
“There is no gavel to bang on the table and demand that everybody come to attention” if the legislation passes and he is charged with implementing it, Makstman said in an interview about the bill. “The best tool is to help people understand why we’re doing this, what is going to happen and how they will be involved,” he said.
He said the bill is an outgrowth of the city’s AI working group, which has been studying the technology and helped produce the mayor’s guidelines on how city workers should use it. Makstman plans to hire an emerging-technologies director to help with the AI-related workload. As for gathering input on the effects of AI programs, Makstman said he is still figuring out what those forums will look like and whether they will be public hearings.
“It’s really important ... for the technologist to hear the labor voice, for the ethicist to hear the privacy voice” about how the technology could impact different people and jobs, Makstman said.
Under the legislation, Makstman would have six months to publish an initial inventory of the AI programs the city is using. He would have a year from when the bill takes effect to write up the AI impact assessment for each program. Any new programs would have to be added to the list with an impact assessment.
Ronen said she wanted to avoid the negative consequences that came from failing to act quickly enough to legislate problems that emerged from social media and ride-sharing companies.
“We’re always trying to play catch-up to this technology, and we now need to get ahead of it,” she said.
The city is already using AI tools, as Ronen’s legislation notes. Those include the city’s 311 mobile app, which uses AI to automatically suggest a type of service based on a user’s written description or photo of a reported issue. Radiologists in the city’s public health department are also using AI-based imaging tools to confirm stroke diagnoses and to support physicians diagnosing issues on CT scans.
Makstman’s department also uses AI as part of its digital security tools to detect and prevent cybersecurity threats.
Ronen said she has heard positive feedback on the proposal from organized labor, including the SEIU 1021 union, which represents 16,000 city workers such as janitorial staff and nurses, since it will help them know what AI-related clauses to include when they bargain for contracts with the city.
Union President Theresa Rutherford said she is concerned about AI picking away at workers’ job duties or being used to automatically eliminate certain types of people from hiring pools, for example.
She said the legislation would allow her union to ensure that AI is not used “to remove workers ... from the workplace, and important resources from the community.”
The board passed legislation earlier this year banning the use of algorithms by landlords to set rent prices in the city. San Francisco already has a law passed in 2019 that bans the use of facial recognition technology — which can use AI — by the police. A recent lawsuit alleged the San Francisco Police Department outsourced the use of the technology to other departments to get around the ban.
(c)2024 The San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.