California Cities Part of Research into Using AI to Expose Police Bias

Algorithms that predict crime might be used to uncover bias in policing, instead of reinforcing it, according to new research.

A group of social and data scientists has developed a machine learning tool that the researchers hoped would better predict crime. The scientists say they succeeded, but their work also revealed inferior police protection in poorer neighborhoods in eight major U.S. cities: Los Angeles, San Francisco, Austin, Chicago, Atlanta, Detroit, Philadelphia, and Portland, Ore.

Instead of justifying more aggressive policing in those areas, however, the researchers hope the technology will lead to “changes in policy that result in more equitable, need-based resource allocation,” including sending officials other than law enforcement to certain kinds of calls, according to a report published last week in the journal Nature Human Behaviour.

The tool, developed by a team led by University of Chicago professor Ishanu Chattopadhyay, forecasts crime by spotting patterns amid vast amounts of public data on property crimes and crimes of violence, learning from the data as it goes. Chattopadhyay and his colleagues said they wanted to ensure the system would not be abused.
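The article does not describe the model’s internals, but the general workflow it gestures at — dividing a city into small spatial tiles, turning public incident records into per-tile event-count time series, and predicting whether a crime will occur roughly a week ahead — can be sketched. The following is a minimal, hypothetical illustration, not the team’s published model; the synthetic data, tile grid, lookback window, and logistic-regression baseline are all assumptions made for the example.

```python
# Hypothetical sketch of one-week-ahead crime forecasting on spatially
# binned event counts. NOT the researchers' actual model; everything
# here (data, windows, classifier) is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for public incident records: daily event counts for each
# spatial tile of a city grid over two years (tiles x days).
n_tiles, n_days = 100, 730
counts = rng.poisson(lam=0.3, size=(n_tiles, n_days))

LOOKBACK, HORIZON = 28, 7  # four weeks of history to predict one week out

X, y, t_index = [], [], []
for tile in counts:
    for t in range(LOOKBACK, n_days - HORIZON):
        X.append(tile[t - LOOKBACK:t])                # recent history window
        y.append(int(tile[t:t + HORIZON].sum() > 0))  # any event next week?
        t_index.append(t)
X, y, t_index = np.array(X), np.array(y), np.array(t_index)

# Hold out the last six months of the series for evaluation, so the
# model is scored only on time periods it never trained on.
train = t_index < (n_days - 180)
model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("held-out accuracy:", model.score(X[~train], y[~train]))
```

A real system would need far richer features and careful spatial cross-validation; the point of the sketch is only the shape of the task: past event counts in, a short-horizon occurrence prediction out.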

“Rather than simply increasing the power of states by predicting the when and where of anticipated crime, our tools allow us to audit them for enforcement biases, and garner deep insight into the nature of the (intertwined) processes through which policing and crime co-evolve in urban spaces,” their report said.

For decades, law enforcement agencies across the country have used digital technology for surveillance and prediction in the belief that it would make policing more efficient and effective. But civil liberties advocates and others have argued that in practice, such tools rely on biased data that contribute to increased patrols in Black and Latino neighborhoods or false accusations against people of color.

Chattopadhyay said previous efforts at crime prediction didn’t always account for systemic biases in law enforcement and were often based on flawed assumptions about crime and its causes. Such algorithms gave undue weight to variables such as the presence of graffiti, he said, and focused on specific “hot spots” while failing to take into account the complex social systems of cities or the effects of police enforcement on crime. The predictions sometimes led to police flooding certain neighborhoods with extra patrols.

His team’s efforts have yielded promising results in some places. The tool predicted future crimes as much as one week in advance with roughly 90 percent accuracy, according to the report.

Running a separate model led to an equally important discovery, Chattopadhyay said. By comparing arrest data across neighborhoods of different socioeconomic levels, the researchers found that crime in wealthier parts of town led to more arrests in those areas, even as arrests in disadvantaged neighborhoods declined.

The opposite was not true. Crime in poor neighborhoods didn’t always lead to more arrests — suggesting “biases in enforcement,” the researchers concluded. The model is based on several years of data from Chicago, but researchers found similar results in the seven other cities in the study.
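The flavor of that bias check can be illustrated with a simple comparison of arrests per reported crime across income groups. This sketch is hypothetical — the column names, grouping rule, and ratio metric are assumptions for illustration, not the study’s actual methodology.

```python
# Hypothetical enforcement-asymmetry check: do arrests track reported
# crime equally in higher- and lower-income neighborhoods? All column
# names and figures are illustrative assumptions, not study data.
import pandas as pd

# Toy stand-in for per-neighborhood weekly counts.
df = pd.DataFrame({
    "neighborhood":    ["A", "A", "B", "B", "C", "C", "D", "D"],
    "median_income":   [95_000, 95_000, 92_000, 92_000,
                        31_000, 31_000, 28_000, 28_000],
    "reported_crimes": [10, 14, 8, 12, 25, 30, 22, 28],
    "arrests":         [6, 9, 5, 8, 7, 8, 6, 7],
})

# Split neighborhoods at the median income and compare arrests per
# reported crime in each group; a large gap suggests unequal enforcement.
df["high_income"] = df["median_income"] >= df["median_income"].median()
ratios = (
    df.groupby("high_income")[["arrests", "reported_crimes"]].sum()
      .assign(arrests_per_report=lambda g: g["arrests"] / g["reported_crimes"])
)
print(ratios["arrests_per_report"])
```

In this toy data, the high-income group shows far more arrests per reported crime than the low-income group — the kind of asymmetry the researchers describe, though their analysis was considerably more sophisticated.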

The researchers decided to make their algorithm available for public audit so anyone can check to see whether it’s being used appropriately, Chattopadhyay said.

Most machine learning models in use by law enforcement today are built on proprietary systems that make it difficult for the public to know how they work or how accurate they are, said Sean Young, executive director of the University of California Institute for Prediction Technology.

In response to such criticism, some data scientists have become more mindful of potential bias in their work.

Despite the study’s promising findings, it’s likely to raise some eyebrows in Los Angeles, where police critics and privacy advocates have long railed against the use of predictive algorithms. In 2020, the Los Angeles Police Department stopped using a predictive-policing program called PredPol that critics argued led to heavier policing in minority neighborhoods.

At the time, Police Chief Michel Moore insisted he ended the program because of budgetary problems brought on by the COVID-19 pandemic. He had previously said he disagreed with the view that PredPol unfairly targeted Latino and Black neighborhoods. Later, Santa Cruz became the first city in the country to ban predictive policing outright.

Chattopadhyay said he sees how machine learning evokes “The Minority Report,” the Philip K. Dick story set in a dystopian future in which people are hauled away by police for crimes they have yet to commit.

But the effect of the technology is only beginning to be felt, he said: “There’s no way of putting the cat back into the bag.”

©2022 Los Angeles Times. Distributed by Tribune Content Agency, LLC.