Heavyweight AI research group says using biometric data to predict criminality is inherently discriminatory as it urges Springer to rescind the study's publication

A heavyweight group of AI experts from the likes of Google and Microsoft, together with researchers and academics, has urged against the publication of a new study claiming to identify or predict criminality based on biometric or criminal legal data, saying that such studies are inherently racially biased and naturalise discriminatory outcomes.

The publication in question – A Deep Neural Network Model to Predict Criminality Using Image Processing – is due to be published by Springer Publishing. But in a letter dated 22 June to the Springer Editorial Committee, the group of around 1,700 expert researchers and practitioners says: “We urge the review committee to publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it.” The group further says it wants “Springer to issue a statement condemning the use of criminal justice statistics to predict criminality, and acknowledging their role in incentivising such harmful scholarship in the past”, and that “All publishers refrain from publishing similar studies in the future.”

The signatories, who include professors from multiple universities including Washington, Massachusetts, Harvard, NYU and MIT as well as Berkeley School of Law, say that the upcoming publication warrants a collective response “because it is emblematic of a larger body of computational research that claims to identify or predict ‘criminality’ using biometric and/or criminal legal data.”

The open letter says: “Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years. Nevertheless, these discredited claims continue to resurface, often under the veneer of new and purportedly neutral statistical methods such as machine learning, the primary method of the publication in question.

“In the past decade, government officials have embraced machine learning and artificial intelligence (AI) as a means of depoliticizing state violence and reasserting the legitimacy of the carceral state, often amid significant social upheaval. Community organizers and Black scholars have been at the forefront of the resistance against the use of AI technologies by law enforcement, with a particular focus on facial recognition. Yet these voices continue to be marginalized, even as industry and the academy invests significant resources in building out ‘fair, accountable and transparent’ practices for machine learning and AI.”

It adds: “Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world. These research agendas reflect the incentives and perspectives of those in the privileged position of developing machine learning models, and the data on which they rely. The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalize social hierarchies and legitimize violence against marginalized groups.”

In the original press release published by Harrisburg University, researchers claimed to “predict if someone is a criminal based solely on a picture of their face,” with “80 percent accuracy and with no racial bias.” The open letter to Springer says: “Let’s be clear: there is no way to develop a system that can predict or identify ‘criminality’ that is not racially biased — because the category of ‘criminality’ itself is racially biased.”

It concludes: “Recent instances of algorithmic bias across race, class, and gender have revealed a structural propensity of machine learning systems to amplify historic forms of discrimination, and have spawned renewed interest in the ethics of technology and its role in society.

“The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world.”

You can read the letter to Springer in full and add your signature here: https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16