Toolkit Targets Bias in Government Algorithms

A new algorithm toolkit could help local governments identify and reduce bias in their automated decision-making processes.

The Ethics & Algorithms Toolkit is the product of a collaboration between the Center for Government Excellence (GovEx), San Francisco’s DataSF program, the Civic Analytics Network, and Data Community DC, which announced the effort recently in a press release. The toolkit is specifically aimed at ensuring fairness in algorithms as they are used in the criminal justice system, higher education, social media networks and other areas.

Andrew Nicklin, the director of data practices for GovEx, said this toolkit was born out of gov tech stakeholders identifying a growing need for both conversation and action around inadvertent bias in algorithms that enable automated governmental decision-making.

“Essentially, a conversation started in academic and government spaces about how to tackle these issues,” Nicklin said, “and, quite frankly, I think we’re only going to see an increase in the use of algorithms in government, with market forces driving that.”

Indeed, bias within such algorithms has been a topic of increased interest, with high-profile media outlets such as ProPublica conducting investigations into the matter. The ProPublica story, which Nicklin referenced as an example of the toolkit’s importance, looked at algorithms being used within government to predict whether individuals were likely to commit future crimes. Reporters found evidence of racial bias: black defendants who did not go on to reoffend were flagged as high risk at roughly twice the rate of white defendants who did not reoffend.
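That kind of disparity is typically measured by comparing error rates across groups. The sketch below, which uses entirely made-up records rather than any real ProPublica or government data, shows how one might compute the false positive rate (people flagged high-risk who did not in fact reoffend) for each group:

```python
from collections import defaultdict

# Hypothetical records for illustration only, not real data:
# (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, True), ("B", False, False),
]

false_pos = defaultdict(int)   # flagged high-risk but did not reoffend
negatives = defaultdict(int)   # everyone who did not reoffend

for group, predicted_high, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high:
            false_pos[group] += 1

# A large gap between groups is the kind of disparity ProPublica reported.
for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.0%}")
```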

Other instances of algorithmic bias have surfaced in automated teacher evaluations, in which a teacher who has consistently scored well on traditional evaluations suddenly receives low marks from an automated system. To help governments catch such problems, the toolkit functions as a sort of risk assessment instrument with two parts: the first a risk assessment framework, and the second a set of mitigations mapped to the identified risk factors.
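The toolkit itself is a set of worksheets and guidance rather than software, but its two-part shape can be sketched in code. In the minimal example below, the risk factors and mitigations are hypothetical inventions for illustration, not the toolkit’s actual contents:

```python
# Part 1: a checklist of risk factors; Part 2: a mitigation mapped to each.
# Both the factors and the mitigations here are made up for this sketch.
RISK_FACTORS = {
    "uses_protected_attributes": "Audit features correlated with race, gender, etc.",
    "no_human_review": "Add a human appeal/override step for affected individuals.",
    "unrepresentative_training_data": "Re-sample or re-weight the training data.",
}

def assess(answers: dict[str, bool]) -> list[tuple[str, str]]:
    """Flag each risk factor present in the answers and pair it with its mitigation."""
    return [(factor, RISK_FACTORS[factor])
            for factor, flagged in answers.items()
            if flagged and factor in RISK_FACTORS]

# Example: a hypothetical scoring system with no human review step.
for factor, mitigation in assess({
    "uses_protected_attributes": False,
    "no_human_review": True,
    "unrepresentative_training_data": True,
}):
    print(f"RISK: {factor}\n  MITIGATION: {mitigation}")
```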

The developers of the toolkit hope that this work will be “a place to start a conversation,” Nicklin said, rather than a comprehensive and immediate solution to a very complex issue. The intent is to help those in local government ask the right questions, with accompanying guidance on how to begin work that can eventually lead to solutions.

The toolkit also serves as a starting point for the work because nothing else like it has been created to date.

The idea for the toolkit grew from conversations between GovEx and DataSF, with former San Francisco Chief Data Officer Joy Bonaguro also instrumental in its creation. Representatives of Data Community DC and the Civic Analytics Network, a Harvard University-based collaborative of municipal data stakeholders, contributed as well.

In terms of the future of this work, Nicklin said discussions are taking place as to what the exact next steps will be. The group behind the toolkit has organized workshops to spread the word, so more of those are likely in the offing. Nicklin also noted that the work may look outside the public sector to see how instances of bias in algorithms are handled by other institutions.

Zack Quaintance is the assistant news editor for Government Technology magazine. His background includes writing for daily newspapers across the country and developing content for a software company in Austin, Texas.