It’s Time To Stop Relying On AI-Based Crime Tools, For The Sake Of Civil Rights
Slowly but surely, more police departments are realizing that they must stop using algorithm-based programs to identify who is most likely to commit violent crimes. These programs have long been used to label people as "chronic offenders," a designation that reflects personal opinion more than actual science, and they have not only produced biased data but have also disproportionately targeted Black and Latino individuals.
This software is arguably nothing short of frightening: it relies on past experience and institutional racism to justify particular results, using predictive algorithms that attempt to forecast whether someone will commit another crime, all based on flawed artificial intelligence data. There is also a lack of transparency concerning how the software has reached its conclusions over the years, and, of course, the software does not allow attorneys to cross-examine those behind its results.
How Does It Work?
Artificial intelligence policing systems are sometimes also referred to as "predictive policing." They essentially "forecast" where crime will occur by "mapping hot spots" and flagging individuals "most likely to be involved in violence." To do so, the software ingests data such as personal connections, relationships, school schedules, social media activity, social events, and other commercially available information in an effort to predict crime. Some programs also assign each individual a "numerical threat score and color coded threat level."
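To make that scoring step concrete, the sketch below shows one way such a score could be produced. It is purely illustrative: the feature names, weights, and color thresholds are invented for explanation and do not reflect any actual vendor's methodology.

```python
# Illustrative sketch only: a hypothetical "threat score" calculation.
# Feature names, weights, and thresholds are invented for explanation;
# they do not reflect any real vendor's system.

THRESHOLDS = [(80, "red"), (50, "orange"), (0, "green")]

WEIGHTS = {
    "prior_arrests": 10.0,       # counts pulled from police records
    "flagged_associates": 5.0,   # contacts already on a "strategic subject list"
    "social_media_flags": 2.0,   # posts flagged by keyword matching
}

def threat_score(person: dict) -> tuple[float, str]:
    """Combine hypothetical features into a score and a color-coded level."""
    score = sum(WEIGHTS[f] * person.get(f, 0) for f in WEIGHTS)
    score = min(score, 100.0)
    level = next(color for cutoff, color in THRESHOLDS if score >= cutoff)
    return score, level

print(threat_score({"prior_arrests": 3, "flagged_associates": 4, "social_media_flags": 6}))
# -> (62.0, 'orange')
```

Even in this toy version, the core civil rights problem is visible: every input is itself a product of prior police contact or surveillance, so the "score" repackages past enforcement decisions as a prediction.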
Concerns
Relying on this software to make arrests presents a number of civil rights concerns. For one, certain search and seizure protections are already frequently abused in pursuit of racial profiling, and these tools will only justify more of that behavior. In addition, the data does not indicate that these systems actually help protect communities; in fact, the "strategic subject lists" that the systems regularly produce, and that some police departments have relied on, have apparently not reduced violent crime at all. Finally, bias is a fundamental problem with these systems: because predictive policing relies on crimes reported by police patrols and communities that are already heavily policed, it inevitably leads to biased feedback loops and additional enforcement in areas that are already heavily policed, against individuals who are already heavily targeted, all based on perceived rather than actual threats, as the sketch below illustrates.
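Here is a minimal sketch of that feedback loop, with made-up numbers: two neighborhoods have the same underlying crime rate, but one starts with more recorded incidents simply because it was patrolled more heavily in the past. If patrols are then allocated in proportion to recorded crime, and recorded crime depends on how heavily an area is watched, the initial skew never corrects itself.

```python
# Minimal sketch of the feedback loop described above; all numbers are invented.
# Two neighborhoods have the SAME underlying crime rate, but neighborhood A
# starts with more recorded incidents because it was patrolled more in the past.

true_crime_rate = {"A": 10, "B": 10}   # identical underlying crime per period
recorded = {"A": 20, "B": 5}           # historical records skewed by past patrols
total_patrols = 100

for period in range(5):
    total_recorded = sum(recorded.values())
    for hood in recorded:
        # Patrols are allocated in proportion to past *recorded* crime...
        share_of_patrols = recorded[hood] / total_recorded
        # ...and how much crime gets recorded depends on how heavily an area
        # is watched, not only on the underlying rate.
        recorded[hood] += true_crime_rate[hood] * share_of_patrols
    print(period, {h: round(v, 1) for h, v in recorded.items()})

# Output shows the initial skew locking in: neighborhood A keeps drawing the
# large majority of patrols and keeps recording more new "crime" than B,
# even though the underlying crime rates never differ.
```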
Is there any way to integrate the use of these systems without encountering all of these problems? That remains to be seen; a good start, however, would arguably involve pushing for greater transparency with the public about how these systems and their data actually work, and possibly even creating an international commission dedicated to regulating their use in criminal matters.
Contact Our New York Criminal Defense Attorneys Today
If you live in New York or New Jersey and have been accused of a crime, do not let yourself become the victim of criminal injustice. Contact our experienced New City criminal defense attorneys at the office of Phillip J. Murphy today to find out how we can help.
Resources:
forbes.com/sites/nikitamalik/2018/10/29/the-problems-with-using-artificial-intelligence-and-facial-recognition-in-policing/
law.com/legaltechnews/2019/04/11/ai-based-crime-tools-arent-the-problem-the-biased-data-they-use-is/