Jul 15 2019
Data Analytics

AI Tools and Analytics Provide Police with Critical Insights

Big cities like New York City deploy advanced analytics, but machine learning raises concerns about racial bias.

A few years ago, the Chicago Police Department established six high-tech crime centers to connect surveillance cameras, gunshot detection platforms, predictive mapping and data analytics in high-crime neighborhoods.

As Government Technology reports, crime fell in neighboring areas. Chicago is now taking the approach further with predictive analytics: the city is working with Microsoft and Genetec on an integrated decision support system that provides police with crime insights.

“AI is the next logical evolution in policing,” Jonathan Lewin, chief of CPD’s Bureau of Technical Services, tells Government Technology. “We have all this data, a lot of sensors, and incoming information from other open sources, including crime tips from citizens. So, plugging all of this into some kind of engine to gain insights and make connections that wouldn’t be obvious to a human is the next logical step.” 

“Genetec is taking advantage of Azure Media Analytics, a portfolio of machine-learning capabilities hosted on the Azure Media Services platform. For example, using the built-in indexer, the company can automatically generate captions or metadata, and enable users to easily search and analyze multiple files,” Microsoft says in a blog post describing the system in Chicago.
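
To make that concrete, here is a minimal sketch of the kind of search-over-metadata workflow such an indexer enables. The toy inverted index, filenames and captions below are illustrative assumptions; this is not the Azure Media Services API:

```python
# Illustrative sketch only (not the Azure Media Services API): a toy
# inverted index over per-file captions, showing the kind of
# search-over-metadata workflow an indexer enables.
from collections import defaultdict

captions = {
    "cam12_0300.mp4": "white sedan stops near the intersection",
    "cam12_0315.mp4": "two pedestrians cross the intersection",
}

index = defaultdict(set)
for filename, text in captions.items():
    for word in text.split():
        index[word].add(filename)

print(sorted(index["intersection"]))  # files whose captions mention the term
```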

AI technology that automates certain human tasks can use existing information to bolster predictive policing. As they wade into this pool, police departments around the country are finding artificial intelligence applications to be transformative. But critics say analytics tools like the integrated decision support system are only as good as the data being analyzed.


Machine Learning Assists Police in Recognizing Patterns

In 2012, New York City launched the Domain Awareness System, which tracks targets and gathers information about them, according to Fast Company.

In recent years, the New York Police Department distributed smartphones to its officers that connect to the Domain Awareness System, which the department says is “one of the world's largest networks of cameras, license plate readers, and radiological sensors, designed to detect and prevent terrorist acts, but also of great value in criminal investigations.”

In 2017, New York also hired 100 civilian analysts to use a program called Patternizr, StateScoop reports. The program is also available to all officers through the Domain Awareness System.

“A collection of machine-learning models, which the department calls Patternizr, was first deployed in December 2016, but the department only revealed the system last month when its developers published a research paper in the INFORMS Journal on Applied Analytics,” according to StateScoop.

“The models that comprise Patternizr are supervised machine-learning classifiers; that is, they are statistical models that learn from historical examples where classifications are known and are then used to predict the classification for samples for which the classifications are unknown,” the INFORMS Journal report reads, according to GCN. “In the case of Patternizr, each example is a pair of crimes, and the classification is whether the two crimes are in a pattern together.”
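
In concrete terms, a pairwise classifier of that kind can be sketched as follows. The features, field names and toy data here are illustrative assumptions, not details from the NYPD paper, and scikit-learn stands in for whatever models the department actually uses:

```python
# Minimal sketch of a pairwise "pattern" classifier in the spirit of
# Patternizr: each training example is a PAIR of crimes, and the label
# says whether the two belong to the same pattern. Field names,
# features and toy data are illustrative assumptions, not NYPD details.
from math import dist

from sklearn.ensemble import RandomForestClassifier

def pair_features(a, b):
    """Compare two crime records and return one feature vector."""
    return [
        abs(a["hour"] - b["hour"]),       # time-of-day gap (hours)
        dist(a["xy"], b["xy"]),           # spatial distance (grid units)
        int(a["method"] == b["method"]),  # same modus operandi?
    ]

# Toy historical examples: (crime_a, crime_b, in_same_pattern)
history = [
    ({"hour": 2, "xy": (0, 0), "method": "lockpick"},
     {"hour": 3, "xy": (1, 0), "method": "lockpick"}, 1),
    ({"hour": 2, "xy": (0, 0), "method": "lockpick"},
     {"hour": 14, "xy": (9, 8), "method": "smash"}, 0),
]
X = [pair_features(a, b) for a, b, _ in history]
y = [label for *_, label in history]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new pair: estimated probability the two crimes share a pattern.
pair = ({"hour": 1, "xy": (0, 1), "method": "lockpick"},
        {"hour": 2, "xy": (2, 1), "method": "lockpick"})
print(model.predict_proba([pair_features(*pair)])[0][1])
```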

“Other police departments could take the information we’ve laid out and build their own tailored version of Patternizr,” NYPD Assistant Commissioner of Data Analytics Evan Levine tells GCN.


Automated Tools Are Only as Good as the Data Under Analysis

Critics charge that AI tools can reinforce bias in policing. If a tool is trained on crime reports in which the majority of suspects are African American, for example, its results will skew toward African Americans.
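
The critics' point reduces to a toy illustration: a predictor that simply learns historical base rates from skewed data reproduces the skew in its output. The counts below are synthetic and illustrative only:

```python
# Toy illustration of the critics' point: a predictor that learns
# historical base rates from skewed data reproduces the skew.
# All counts here are synthetic.
from collections import Counter

history = ["group_a"] * 80 + ["group_b"] * 20  # skewed historical labels

base_rates = {g: n / len(history) for g, n in Counter(history).items()}
print(base_rates)  # {'group_a': 0.8, 'group_b': 0.2}: output mirrors the input skew
```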

San Francisco is trying to leverage AI to remove racial bias from reports, according to Forbes.

“Between 2008 and 2014, African Americans accounted for 43% of people booked into jail despite only making up 6% of San Francisco’s population, according to a 2017 report on the DA’s office,” Forbes says. While the San Francisco district attorney report did not find significant evidence of overt racial bias in prosecutors' cases, there could still be instances of implicit bias. San Francisco's AI tool will try to control for that by removing references to race in police reports.
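
A first-pass version of that kind of redaction can be sketched as simple text masking. The term list and regular expression below are illustrative assumptions, not the Stanford Computational Policy Lab's actual method, which would also have to catch indirect proxies for race:

```python
# Minimal sketch of masking explicit race references in report text.
# The term list and regex are illustrative assumptions, not the Stanford
# lab's actual method, which would also need to handle indirect proxies
# for race such as names, neighborhoods and physical descriptions.
import re

RACE_TERMS = ["white", "black", "african american", "hispanic",
              "latino", "asian", "caucasian"]  # assumed, not exhaustive

PATTERN = re.compile(r"\b(?:" + "|".join(map(re.escape, RACE_TERMS)) + r")\b",
                     re.IGNORECASE)

def redact_race(report: str) -> str:
    """Replace each explicit race term with a neutral placeholder."""
    return PATTERN.sub("[REDACTED]", report)

print(redact_race("Suspect described as a Black male, mid-30s."))
# -> Suspect described as a [REDACTED] male, mid-30s.
```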

Alex Chohlas-Wood, one of the creators of San Francisco's tool, also helped the NYPD create Patternizr, its machine learning tool. Chohlas-Wood is deputy director of the Stanford Computational Policy Lab, which developed the district attorney's AI tool along with an accompanying web platform.

Experts anticipate that adoption of machine learning tools will continue to grow, particularly in major cities that face the challenge of analyzing Big Data. Public safety agencies will also soon decide whether to extend those capabilities with predictive analytics.

This article is part of StateTech's CITizen blog series. Please join the discussion on Twitter by using the #StateLocalIT hashtag.
