Why a computer could help you get a fair trial

In 1963, an American attorney named Reed Lawlor published a prescient article in the American Bar Association Journal. “In a few years,” he wrote, “lawyers will rely more and more on computers to perform many tasks for them. They will not rely on computers simply to do their bookkeeping, filing or other clerical tasks. They will also use them in their research and in the analysis and prediction of judicial decisions. In the latter tasks, they will make use of modern logic and the mathematical theory of probability, at least indirectly.”

I cannot locate any contemporaneous accounts of how this insight was received in US legal circles, but it’s easy to imagine how it must have gone down in the Inns of Court over here when the news about computers eventually reached these shores. “The sadness about the bar nowadays,” wrote John Mortimer QC in 2002, “is that the Rumpoles are dying out, to be replaced… by greyish figures who think that the art of advocacy has been replaced by computer technology.”

Now spool forward to October 2016 and to Gower Street, a stone’s throw from Gray’s Inn, where a group of computer scientists is huddled in a laboratory in University College London. They are tending a machine they have built that does natural language processing and machine learning; in that sense, it might be said to be an example of artificial intelligence (AI).

The machine has an insatiable appetite for English text and so the researchers have fed it all the documents relating to 584 cases decided by the European court of human rights (ECHR) on alleged infringements of articles 3, 6 and 8 of the European convention on human rights. Having ingested and analysed this mountain of text, the machine has been asked to predict the judgment that it thinks the court would have reached in each case. In the end, it reached the same conclusion as the judges of the court did in 79% of the cases.
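To give a flavour of how such a system works, here is a minimal sketch in Python of the general technique, supervised text classification: each case’s documents are reduced to word-count features, and a linear classifier learns to map those features to the court’s outcome. Everything in it (the toy texts, the scikit-learn pipeline) is an illustrative assumption, not the UCL team’s actual code.

```python
# A toy sketch of judgment prediction as binary text classification,
# assuming scikit-learn is installed. The two-case "corpus" below is a
# stand-in for the documents from the 584 real ECHR cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder case texts with outcome labels:
# 1 = violation found, 0 = no violation.
case_texts = [
    "The applicant alleges inhuman and degrading treatment in detention.",
    "The applicant complains that the civil proceedings were unfair.",
]
labels = [1, 0]

# Word and word-pair counts feeding a linear support vector machine,
# a standard baseline for classifying documents by outcome.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(case_texts, labels)

# Predict the outcome of an unseen case from its text alone.
print(model.predict(["The applicant was held in degrading conditions."]))
```

The 79% figure reported for the experiment is precisely this kind of agreement rate: how often the machine’s predicted outcome matched the one the court actually reached.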

Given the complexity of the cases involved, this seems (at least to this lay observer) to be a remarkable outcome. Article 3, for example, prohibits torture and inhuman and degrading treatment, article 6 protects the right to a fair trial, while article 8 provides an individual with a right to respect for their private and family life, home and correspondence. These are all areas that have compelling moral and ethical dimensions as well as a strong evidential basis. If I had been asked before the experiment to predict the accuracy of the machine’s analysis, I would have said that 10% would be a good result. How wrong can you be?

Judges of the European court of human rights. Photograph: Frederick Florin/AFP/Getty Images

That the UCL machine was able to do so well suggests that ECHR judgments depend more on non-legal facts (easier for machines to assess) than on legal arguments. If that is indeed the case, then legal philosophers will see the experiment as grounds for reopening the discussion about what human judges are for – and what they are good at. After all, as one commentator puts it: “If AI can examine the case record and accurately decide cases based on the facts, human judges could be reserved for higher courts where more complex legal questions need to be examined.”

The experiment will doubtless spark dystopian fears about machines making decisions that have life-changing consequences for humans. In that sense, it will be yet another replay of the ongoing debate about whether AI will replace or augment human capability. Judge Richard Posner, probably the world’s most cited legal theorist, takes the latter view. “I look forward to a time,” he wrote, “when computers will create profiles of judges’ philosophies from their opinions and their public statements and will update these profiles continuously as the judges issue additional opinions. [These] profiles will enable lawyers and judges to predict judicial behaviour more accurately and will assist judges in maintaining consistency with their previous decisions – when they want to.”

Besides, the new EU general data protection regulation explicitly states that individuals have the right not to be subject to a decision based solely on automated processing. There has to be a human in the loop somewhere. (It is to be hoped this will also apply in the post-Brexit UK.) What the landmark UCL experiment points to, therefore, is not a future in which a robot decides whether you go to jail, but one in which an AI-assisted human judge makes a more consistent and informed judgment in your particular case.
