A leading professor of computational legal theory at the University of Edinburgh has warned both lawyers and technologists against repeating the mistakes of history and setting artificial intelligence (AI) up to fail, as happened 20 years ago, when AI fell into ‘a long winter of discontent’ because it couldn’t measure up to the hype.
In a wide-ranging talk that concluded by flagging a number of potentially transformative current AI developments, Professor Burkhard Schafer took the Legal Leaders IT Forum (LLIT) in Gleneagles back to the 1980s, when AI was the ‘new big thing’. “Suddenly, computers were good at playing chess. And, as you all know, playing chess is the ultimate pinnacle of human intelligence,” he said wryly. “So, because a computer can play chess, inevitably that meant that within two or three years they would replace humans in all other activities as well. Already then, people started to write about the end of the legal profession, or at least its transformation beyond recognition. It hasn’t happened yet, and that is partly a fault of our communities, both legal and technological, because what we were aiming for at the time was something like a robo-judge.”
During the ensuing AI winter, even the name AI fell out of favour. Professor Schafer said: “We quietly, more or less, dropped that word, because it was tainted goods. Research continued, but they didn’t call it AI any longer, instead focusing on the specific activities studied – machine learning, information retrieval, decision advice systems – simply because the over-hype of the technology, and the disappointment it had created, made it problematic to enthuse people.”
During that period there was little to no engagement from legal professionals, but a notable turning point came at a December workshop hosted by Jurix, the legal knowledge-based systems forum, where half of the speakers came from legal practice and had developed their own mini AI systems. Professor Schafer said: “That sort of interest and activity has simply not happened before.
“So suddenly there is a really remarkable change which I’m very excited about, but also slightly wary about because, again, we should be learning from history.”
So what were the technical shortcomings of AI the first time round? One was the knowledge acquisition bottleneck. “Because we tried to model explicitly quite a lot about legal reasoning in its context, it took a very, very long time to get the knowledge out of the employees into the machine,” said Professor Schafer, adding, “By the time that was completed, technology had moved on, or you had moved on, or the business environment had moved on.”
Early systems were also inflexible and lacked robustness – they had little ability to deal with unforeseen situations or to transfer their knowledge from one field to another – and this remains an issue today. “Deep Blue plays chess, but only chess. A Google car drives, but only drives. Here is a danger in our attitudes and perceptions. We learn that humans who are good at chess are often also good at strategic thinking elsewhere, and that being good at driving means having hand-eye coordination and spatial reasoning that also helps with football. But we cannot make the same inferences with intelligent machines.”
Another problem was consistency maintenance. “We built these huge legal expert systems that modelled lots and lots of legal rules, and then Parliament was so inconsiderate as to enact a new rule – and then what do you do? By that time we, the academics, had moved on, and people in the organisation very often didn’t even have the skill to operate this technology, because it was written in a language only researchers could comfortably write in.”
A further issue was that early systems lacked any sort of creativity. “You don’t want to be too creative with the law or your clients’ futures,” said Professor Schafer. “You don’t want to go to them and say, ‘Look, yes, you have a really, really difficult case, but I – and only I – have found the one interpretation of that specific provision that no one has ever seen. Go ahead! Risk it. I am 100% behind you – these High Court judges, what do they know!’” But a little bit of creativity is needed if you are looking for that competitive edge.
One of the real game changers for AI this time round has been the development of machine learning. With reference to issues such as consistency maintenance, Professor Schafer said: “We are no longer dependent on static snapshot pictures of the legal system – these systems upgrade themselves through machine learning and through the flexibility that that brings.
“Once you added this, then it really became data science – and that, for me, is one of these big game changers because it dealt with the problem of flexibility, it dealt with the knowledge acquisition problem, that we suddenly could say we don’t need to specify everything in advance – these machines, these algorithms, these AIs, can learn by themselves, update themselves, are just more agile to whatever is coming.”
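To see why that matters in practice, consider a deliberately tiny sketch – our illustration, not anything presented in the talk – of the ‘update by retraining’ idea, written in Python using the scikit-learn library. The case documents, labels and outcomes are invented; the point is only that accommodating new law means refitting a model on new examples rather than hand-editing a rule base.

# A toy illustration of "update by retraining": instead of hand-editing
# rules when the law changes, the model is simply refitted on the
# enlarged corpus. All documents and outcome labels here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

documents = [
    "tenant failed to pay rent for three months",
    "landlord did not maintain the heating system",
]
outcomes = ["landlord_wins", "tenant_wins"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, outcomes)  # the initial "snapshot" of the system

# Parliament enacts a new rule and new decisions start to appear:
documents.append("deposit not protected under the new tenancy scheme")
outcomes.append("tenant_wins")
model.fit(documents, outcomes)  # no rule base to rewrite – just refit

print(model.predict(["tenant withheld rent over broken heating"]))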
Examples of interesting AI projects include a Eurobonds transaction framework based on the Legal Knowledge Interchange Format (LKIF). It was created by Professor Schafer’s Master’s student Orlando Conetta before he was hired by Pinsent Masons, where he is now head of research and development. Professor Schafer said: “I had high hopes for Orlando. I was hoping he was going to write my articles for the next three years and pay us, in addition, PhD student fees – and then Pinsent Masons gave him, amazingly enough, a proper job to develop this idea further.”
Another well-known example is Lex Machina, which uses machine learning to analyse docket submissions, court decisions and other materials in the field of patent law in order to predict outcomes. “Here we have a new type of company,” Professor Schafer said. “They are not a traditional software company. They are not a traditional law firm. It is a genuinely interdisciplinary company, where you have both legal expertise and computing expertise to develop a service in a form that did not exist before. And that is, I think, slightly more exciting – a really new field, a new chance for crossovers between the legal industry and the software industry. And you really need both. You can’t do what they are doing without legal expertise, and you can’t do what they are doing without computer science expertise.”
Other examples of legal and computer science crossovers include TrademarkNow, which uses machine learning tools to data-mine trademark text. And while automated cars don’t often come up in the context of legal AI, they are an example of how law and technology will intersect going forward. Professor Schafer said: “If it’s going to work, then it needs to have not just a driver, but in the back a data protection lawyer who tells the car what sort of information it can disclose about itself to other cars. It also needs a contract lawyer, because the car will automatically pay things like the congestion charge. And obviously it also needs a lawyer who knows a lot about road traffic law. So, in order for that machine to work appropriately, it needs to have a basic ability for legal reasoning.”
Looking at the much bigger picture, the final piece of the puzzle for AI is creativity, empathy and legal reasoning in the full sense: the ability to explain why the computer has taken a course of action.
Outside of law, in the world of art, Simon Colton’s project The Painting Fool is showing us the possibilities. “It’s a painting project, a program, and one of the things it is getting increasingly good at is emotional recognition,” said Professor Schafer. “So it realised that the person it was doing a portrait of was angry, unhappy, probably depressed, simply from the facial expression. There wasn’t a pre-fixed rule in the system that told it that; it had simply learned from lots and lots of images that this is not a happy bunny. And it had also learned that if people are not happy, a painting in bright colours and with a big smile is not the appropriate response. So it came up with an image in dark, muted colours, a sort of cry of agony, to express that sort of feeling.”
Emotional recognition is one of the big research fields in AI, including in data mining and predictive coding. “Give me all the documents where someone is really, really angry. If you want to show that there is a hostile environment towards women in a big company, that is more important than looking for specific key words. You want emotion, the raw emotion expressed,” said Professor Schafer.
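As a rough illustration of what such a query could look like in practice – again our own sketch rather than any real e-discovery product, with invented training snippets – a model can be taught what ‘angry’ reads like and then used to rank a review set, instead of matching keywords:

# A toy sketch of emotion-driven document review: rank documents by a
# learned "anger" score instead of searching for fixed keywords.
# The training snippets and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "this is completely unacceptable, I am furious",
    "thanks for the update, all looks fine",
    "I have had enough of being ignored in every meeting",
    "please find the quarterly figures attached",
]
train_labels = [1, 0, 1, 0]  # 1 = angry, 0 = neutral

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

review_set = [
    "the schedule for next week is attached",
    "I am sick of these constant dismissive comments",
]
# Sort the review set by the model's anger probability, highest first.
scores = clf.predict_proba(review_set)[:, 1]
for score, doc in sorted(zip(scores, review_set), reverse=True):
    print(f"{score:.2f}  {doc}")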
Having read the Guardian at a time when there was lots of bad news, The Painting Fool not only painted a sad picture but explained how it had arrived at those choices. Professor Schafer said: “One of the things we noticed old AI was bad at, but which is essential to legal expertise, is giving reasons – explaining why it did what it did, rather than just doing the right thing.
“That is what I think sets law apart from many other disciplines. Doing the right thing is not enough. We need to be able to explain why we did the right thing; that the reasons were the right reasons.
“And that is the third pillar of what I see as the big developments: emotion and emotional recognition; individual styles, individualised AIs that respond to their external input and form a sort of identity; but also explanation-centric computing, being able to say why they did what they did.
“This is coming here from the art world: a brilliant vision of the future, a really transformative future in the field of AI – being able to deal with human emotions, being able to deal with explanations, with reasons, and being able to individualise what you are doing rather than following a boilerplate. I think this might also matter for law, not just art.”
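What explanation-centric computing might look like at its very simplest can also be sketched in code. The example below is our illustration, with invented documents and labels, and with feature weights standing in for ‘reasons’: a linear classifier reports which terms pushed it towards its prediction. Real explanation research goes far beyond this, but it captures the shift from merely doing the right thing to saying why.

# A crude stand-in for "explanation-centric computing": after a linear
# model classifies a document, list the terms that pushed it towards the
# predicted outcome. The documents and labels below are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "claimant dismissed without notice after raising safety concerns",
    "contract terminated by mutual agreement with settlement",
    "employee dismissed for gross misconduct after investigation",
    "parties agreed severance terms and released all claims",
]
labels = [1, 0, 1, 0]  # 1 = contested dismissal, 0 = agreed exit

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

doc = "worker dismissed after raising concerns about safety"
x = vec.transform([doc])
pred = clf.predict(x)[0]
print(f"prediction: {pred}")

# Each term's contribution = its learned weight times its tf-idf value
# in this document, signed towards the predicted class; the largest
# positive contributions serve as the model's "reasons".
direction = 1 if pred == 1 else -1
contrib = direction * x.toarray()[0] * clf.coef_[0]
terms = vec.get_feature_names_out()
for i in np.argsort(contrib)[::-1][:3]:
    if contrib[i] > 0:
        print(f"  because of: {terms[i]} ({contrib[i]:.2f})")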
This article first appeared in the latest Legal IT Insider. You can register on our website to receive the free monthly issue by email.