AI or Excel on steroids? The ethics of AI as a tool, and its future regulation

From a technological perspective, AI is simply a series of computing techniques designed to mimic human behaviour. In that sense, AI has been around since the 1950s, Bristows IT partner Chris Holder told delegates at the 2019 Legal Leaders #GlenLegal IT Forum. “I would question whether what many CIOs are currently purchasing is true AI, or whether it is actually just Excel on steroids,” he said, in a wide-ranging and thoughtful talk on how AI is currently being used, how that will change, and what that means in terms of meaningful regulation and ethics.
Holder pointed to Joshua Browder, who developed a software program designed to help people avoid paying a parking fine. “The press reported the story as the arrival of robot lawyers. But when you actually analysed it, this was really just a decision tree. This wasn’t a machine making decisions on its own.”
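To illustrate Holder’s point, a minimal sketch of what such a rules-based decision tree might look like in Python follows. The questions, grounds of appeal and thresholds are hypothetical examples, not Browder’s actual logic; the point is simply that every branch is fixed in advance by the programmer, so nothing is being decided by the machine on its own.

```python
# Illustrative only: a hand-written decision tree for contesting a parking ticket.
# The conditions and outcomes below are hypothetical, not drawn from any real tool.
# Every branch is authored in advance; nothing here learns or "decides" by itself.

def parking_ticket_advice(signage_visible: bool,
                          permit_displayed: bool,
                          overstay_minutes: int) -> str:
    if not signage_visible:
        return "Possible ground for appeal: restrictions were not clearly signposted."
    if permit_displayed:
        return "Possible ground for appeal: a valid permit was displayed."
    if overstay_minutes <= 10:
        return "Possible ground for appeal: overstay fell within a typical grace period."
    return "No obvious ground for appeal identified."


if __name__ == "__main__":
    # Example run with hypothetical answers to the questionnaire.
    print(parking_ticket_advice(signage_visible=True,
                                permit_displayed=False,
                                overstay_minutes=5))
```

However sophisticated the questionnaire, the output is entirely determined by rules a human wrote, which is why Holder characterises this category of tool as functionality rather than machine decision-making.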
The key point of differentiation is whether the machine is providing functionality or actual advice. Holder pointed to a fitness device that takes heart-rate readings and then tells the wearer to seek medical help for a suspected heart attack. Machines that cross over from functionality into advice must be registered as medical devices, he explained.
Analogies can be drawn with how the use of AI may evolve in the legal sector. Is AI going to be a tool, or will it actually provide legal advice? If AI is used as a tool and a lawyer relies on the information it provides to advise a client, can the lawyer pass a claim on to the software provider if they are sued for negligent counsel?
The reality is that the software provider would insist the software had worked perfectly and that, under the terms of the licence, that is all it is required to do. “There is a big difference between the provision of advice and the provision of technology to work to a specification,” said Holder.
Currently, the legal sector uses AI as a tool: AI functionality helps lawyers provide advice but does not provide advice itself. As the technology develops, however, and machines start acting of their own accord, the onus will be on the law to catch up quickly.
Ethics by design, in particular, will come to the fore. In its report to the European Commission, for example, the European Parliament recommended a register of smart robots and envisaged a situation in which every company involved in the field would have a director of ethics. The report went as far as to suggest there may be a case for giving robots legal personhood.
“This isn’t about giving robots human rights, as many people seemed to think. It is about whether a machine may have civil liability and you can sue it; or whether a machine can have criminal liability – but can a machine be put in prison?  To date, jurisprudence has focused exclusively on human creations. It has not dealt with machines that start making decisions by themselves. But it will need to.”
The Law Society, for example, is already considering (see also the report below) whether machines will need to be regulated in the way that law firms and individual lawyers are, or whether firms and individuals will be held responsible for the actions of those machines.
Holder added that lawyers operating in the UK have the advantage of flexible English common law. On the Continent, however, a robot directive is likely to apply a one-size-fits-all set of regulations across entire industries.
“I would suggest that is the wrong approach as every sector will approach AI in a different way,” Holder said. “Do we need to legislate against autonomous weapons picking their own targets? Yes, of course we do. But does healthcare’s use of robots need to be treated in the same way? I would think not.”
AI has been through many peaks and troughs of hype and disappointment over the past few decades. Much as with the Wireless Application Protocol (WAP) of the late 1990s, initial hype was followed by disappointment, yet that technology paved the way for modern smartphones and the social and commercial revolution that has followed. Like it, AI is here to stay, said Holder. “This technology will change the way we work as lawyers and the way that society operates. We need to be ready as an industry to deal with the many changes this will bring.”
By Amy Carroll
See also:
https://legaltechnology.com//latest-news/police-are-innovating-in-silos-use-of-algorithms-in-the-justice-system-needs-urgent-oversight/