If predictive policing systems rely on “dirty data,” they run the risk of exacerbating discriminatory actions toward minority suspects in the criminal justice system, a new study suggests.
Dirty data refers to data created from flawed, racially biased, and sometimes unlawful practices.
Law enforcement has come under scrutiny in recent years for practices resulting in disproportionate aggression toward minority suspects, and researchers wanted to know whether technology, specifically predictive policing software, might diminish those discriminatory actions.
For the paper, available on SSRN, researchers used case study data from Chicago, New Orleans, and Arizona’s Maricopa County.
“We chose these sites because we found an overlap between extensively documented evidence of corrupt or unlawful police practices and significant interest, development, and current or prior use of predictive policing systems,” says Jason Schultz, a professor of clinical law at New York University and one of the paper’s coauthors. “This led us to examine the risks that one would influence the other.”
Researchers identified 13 jurisdictions, including the three case-study sites above, that have documented instances of unlawful or biased police practices and that explored or deployed predictive policing systems during the periods of unlawful activity.
The Chicago Police Department, for example, was under federal investigation for unlawful police practices when it implemented a computerized system that identifies people at risk of becoming a victim of, or offender in, a shooting or homicide.
The study found that the demographic groups the Department of Justice identified as targets of Chicago’s policing bias overlapped with those the predictive system flagged.
Other examples showed significant risks of similar overlap, but because government use of predictive policing systems is often secret and hidden from public oversight, the extent of those risks remains unknown, according to the study.
“In jurisdictions that have well-established histories of corrupt police practices, there is a substantial risk that data generated from such practices could corrupt predictive computational systems. In such circumstances, robust public oversight and accountability are essential,” Schultz says.
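The mechanism Schultz describes is essentially a feedback loop: skewed records shape where a system sends police, and police presence shapes which incidents get recorded. The toy simulation below is only a sketch of that loop in Python, using entirely hypothetical neighborhoods, rates, and a naive patrol-allocation rule; none of it is drawn from the study or from any real system.

```python
# Toy illustration only: hypothetical neighborhoods and numbers, not data from the study.
import random

random.seed(0)

POPULATION = 10_000
TRUE_RATE = 0.05                 # both neighborhoods have the same true incident rate
HISTORY = {"A": 600, "B": 300}   # "dirty data": A was historically over-policed,
                                 # so twice as many incidents were recorded there

def recorded_incidents(patrol_share):
    """Recorded counts depend on true incidents *and* on where patrols are sent."""
    counts = {}
    for hood, share in patrol_share.items():
        true_incidents = sum(random.random() < TRUE_RATE for _ in range(POPULATION))
        # An incident only enters the data if a patrol is around to record it.
        counts[hood] = round(true_incidents * share)
    return counts

# A naive "predictor": allocate next year's patrols in proportion to last year's records.
records = HISTORY
for year in range(5):
    total = sum(records.values())
    patrol_share = {hood: count / total for hood, count in records.items()}
    records = recorded_incidents(patrol_share)
    print(f"year {year}: patrol share={patrol_share}, recorded={records}")
```

Run for a few simulated years, the patrol split never recovers from the skew in the historical records even though the underlying incident rates are identical, which is the kind of corruption the authors argue public oversight is needed to catch.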
“Even though this study was limited to jurisdictions with well-established histories of police misconduct and discriminatory police practices, we know that these concerns about policing practices and policies are not limited to these jurisdictions, so greater scrutiny regarding the data used in predictive policing technologies is necessary globally,” says lead author Rashida Richardson, director of policy research at New York University’s AI Now Institute.
Source: New York University