The Law Society of England and Wales today (14 June) launched a public policy commission to review the impact of technology and data on human rights and justice. According to new Law Society vice president and commissioner Christina Blacklaws, the review will examine unconscious bias, ethics and rights, what kind of oversight is required, and what the financial and social cost could be if algorithms are skewed. The commission will initially look at the use of AI in legal practice and in wider society, including by the police and prison services, where the following examples of ‘new technology’ are in use:
– Durham Constabulary have used an artificial intelligence system to inform decisions about whether to keep a suspect in custody. The system uses an algorithm to classify arrestees as low, medium or high risk of reoffending, so that those forecast as medium risk can be made eligible for a programme designed to reduce reoffending
– Mathematicians and social scientists in the US, working with the Los Angeles Police Department, have developed a crime-prediction tool, ‘PredPol’, which has since been used by Kent Police for predictive hotspot mapping
– The Metropolitan Police and South Wales Police use facial recognition technology at public events, music festivals and demonstrations to cross-reference people already on watch-lists.
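To make concrete the kind of decision rule the commission will scrutinise, here is a minimal, purely illustrative sketch of a risk-tier classifier of the sort described in the Durham example. The thresholds and function names are invented for illustration and do not reflect the actual design of Durham Constabulary's system:

```python
def risk_tier(score: float) -> str:
    """Map a reoffending-risk score in [0, 1] to a tier.

    The band boundaries below are invented for illustration only;
    a real system's thresholds would be set (and contested) by policy.
    """
    if score < 0.3:
        return "low"
    if score < 0.7:
        return "medium"
    return "high"


def eligible_for_programme(score: float) -> bool:
    """Per the article, only medium-risk arrestees qualify
    for the reoffending-reduction programme."""
    return risk_tier(score) == "medium"


print(risk_tier(0.5))               # medium
print(eligible_for_programme(0.5))  # True
```

Even this toy version shows why oversight matters: a small shift in a threshold silently changes who is detained and who is offered rehabilitation.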
Blacklaws said: “Big data and algorithms already augment human capabilities for analysis and prediction beyond anything previous generations could have imagined. Their use could – and sometimes does – keep us safer, preserve scarce resources and expand the reach of increasingly stretched law enforcement.
“But the design, sale and use of algorithms to deliver justice or maintain security also raises questions about unconscious bias, ethics and rights. Further potential risks may emerge when an algorithm is developed by a business focused on profit rather than by an organisation focused on delivering justice.”
The Law Society commissioners will take oral and written evidence from the technology sector, government, academia, commercial actors, and legal and human rights experts to explore an overarching question: what framework for the use of big data and algorithms could protect human rights and trust in the justice system?
Blacklaws added: “The questions we will explore include: What are the financial and social costs if algorithms are skewed? When is the use of algorithms and big data appropriate? What kind of oversight do we need? How do we ensure that the data used is correct and free from bias? And how do we ensure decisions are accessible and can be reviewed or appealed?”
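One concrete way to probe the bias question Blacklaws raises is to compare a system's error rates across demographic groups. A minimal sketch, with a data schema and group labels invented for illustration:

```python
from collections import defaultdict


def false_positive_rates(records):
    """Per-group false-positive rates for a binary high-risk flag.

    `records` is a list of (group, predicted_high_risk, reoffended)
    tuples; this schema is hypothetical, chosen for illustration.
    A false positive is someone flagged high risk who did not reoffend.
    """
    fp = defaultdict(int)   # flagged high risk, did not reoffend
    neg = defaultdict(int)  # all who did not reoffend
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}


# Toy data: group B is wrongly flagged more often than group A.
records = [
    ("A", True, False), ("A", False, False), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]
print(false_positive_rates(records))
```

A gap between groups in such a metric would not by itself prove unfairness, but it is the kind of measurable signal an oversight framework could require vendors to report.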