Cities Give Predictive Policing a Second Look

The data-driven technology offers some cities positive results, but questions swirl around the equity and long-term effectiveness of the practice.

It may sound like a sci-fi movie, but Big Data is helping police spot where criminals will strike next.

Predictive policing, which calls on analytics and digital tools to help police officers better predict, understand and fight crime, is showing real-world results in reducing crime rates.

But as police departments in cities and counties around the country pilot the technology, new concerns are cropping up around the equity and long-term efficacy of the practice.

Predictive Policing Practices Come Under Fire

Earlier this year, Chicago’s 7th District Police reported plummeting crime rates as a result of algorithms and tools, such as digital maps, which augment the city’s police force of more than 12,500 officers, Reuters reports.

These technologies, situated in control rooms in four police headquarters around the city, are part of an experiment in predictive policing aimed at reducing the rate of violent crime in the city. And the tech appears to be doing just that. In the 7th District, where the tech has had the largest impact, the number of shootings between January and July of this year dropped 39 percent compared to the same period last year.

The murder rate also dipped 33 percent from January to July in the district, while the murder rate in the city as a whole rose.

“The community is starting to see real change in regards to violence,” Kenneth Johnson, the 7th District commander, told Reuters.

But even as these efforts meet with a measure of success, questions arise over whether police are simply treating the symptoms of larger issues rather than the underlying causes.

“Real answers are hard,” Andrew Ferguson, a law professor at the University of the District of Columbia who has written a book on police technology, told Reuters. “They involve better education, better economic opportunity, dealing with poverty and mental illness.”

Elsewhere, similar practices are met with other questions, such as whether the algorithms, built on existing and possibly flawed data, are perpetuating bias.

Atlantic City, for instance, is using the technology to drive down crime rates, even after making deep cuts in staffing.

“The first six months of this year, our violent crime is down about 20 percent compared to the same time last year. But at the same time our arrests are also down 17 percent,” Police Chief Henry White Jr. told The Philadelphia Inquirer.

Civil rights groups such as the ACLU, however, have voiced concerns that a lack of transparency about the algorithms and practices behind predictive policing is preventing a well-informed public debate about the tools, including whether they further racial bias and contribute to the over-policing of minority communities.

Digital Tools Could Help Mitigate Algorithmic Bias

Amid such questions, a tool has emerged that could help the engineers and product managers who design these systems mitigate some of these issues for interested police forces. A digital decisions tool developed and released by the Center for Democracy & Technology aims to “help developers understand and mitigate unintended bias and ethical pitfalls as they design automated decision-making systems,” CDT Policy Analyst Natasha Duarte explained in a blog post.

Informed by extensive research on machine learning, the interactive tool looks to translate “principles for fair and ethical automated decision-making into a series of questions that can be addressed during the process of designing and deploying an algorithm,” Duarte wrote. “The questions address developers’ choices, such as what data to use to train an algorithm, what factors or features in the data to consider and how to test the algorithm. They also ask about the systems and checks in place to assess risk and ensure fairness.”
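
To make one of those questions concrete, the short sketch below shows the kind of fairness check a development team might run before deployment: comparing a model’s false positive rates across demographic groups on held-out test data, since a large gap between groups is one common signal of disparate impact. This is an illustration only, not drawn from the CDT tool itself, and the data, group names and threshold are hypothetical.

```python
# Illustrative sketch only: one fairness check a team might run before
# deploying a predictive model. Not part of the CDT tool; all names and
# data below are hypothetical.

from collections import defaultdict

def false_positive_rates(records):
    """Compute the false positive rate per group.

    Each record is (group, actual, predicted), where actual and predicted
    are booleans (True = flagged as high risk).
    """
    negatives = defaultdict(int)        # actual negatives seen per group
    false_positives = defaultdict(int)  # of those, how many were flagged
    for group, actual, predicted in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives}

# Hypothetical held-out test data: (group, actual outcome, model prediction)
test_records = [
    ("group_a", False, True), ("group_a", False, False),
    ("group_a", True, True),  ("group_a", False, False),
    ("group_b", False, True), ("group_b", False, True),
    ("group_b", True, True),  ("group_b", False, False),
]

rates = false_positive_rates(test_records)
print(rates)  # roughly {'group_a': 0.33, 'group_b': 0.67}

# A large gap between groups is one signal of disparate impact worth
# investigating before deployment. The 0.1 threshold here is a policy
# choice made up for this example, not a standard constant.
if max(rates.values()) - min(rates.values()) > 0.1:
    print("Warning: false positive rates differ substantially across groups")
```

A check like this answers only one of the questions Duarte describes; choices about training data and which features to include require their own scrutiny.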

The tool is still a work in progress; the CDT will continue to refine it based on feedback and continuing research. But even a finished version, Duarte cautioned, will not resolve every issue the practice raises or every potential pitfall of the technology.

“This tool is not a panacea for algorithmic bias and disparate outcomes,” Duarte wrote. “The technology community, academics and civil society must continue to engage in research, information sharing and the development of technical tools to help mitigate the risk of discrimination and other harms in automated decision-making.”