In a world where law firms are often criticised for being slow to adopt new technologies, we should applaud Allen & Overy for its wow-factor launch yesterday (16 February) of ‘co-pilot’ Harvey, which helps lawyers to conduct research and due diligence using natural language instructions, leveraging the latest OpenAI large language model. I’d be lying, though, if I said I didn’t have some serious reservations.
Founded in 2022 by former O’Melveny & Myers antitrust litigator Winston Weinberg and former DeepMind, Google Brain, and Meta AI research scientist Gabriel Pereyra, Harvey is a verticalised version of what I understand to be GPT-4, which has been trained on vast swathes of the internet. By verticalised, I mean that Harvey has further trained the model on legal sector-specific data. Harvey, which in November last year received $5m in investment from OpenAI, has been working with a number of law firms – including A&O – in beta.
The model, which has now been rolled out by A&O across its 43 offices, can automate various aspects of legal work, such as contract analysis, due diligence, litigation and regulatory compliance. It can generate insights, recommendations and predictions without requiring any immediate training, which A&O says will enable its lawyers to deliver faster, smarter and more cost-effective solutions to their clients.
Depending on who you talk to, this is either a disaster waiting to happen (for a multitude of reasons I’ll come on to) or the very first, genuinely game-changing innovation from the legal sector (for a multitude of reasons I’ll come on to). Whichever side of the fence you sit on, there is no doubt it is quite mind-blowing.
The beta Harvey project was led by Allen & Overy’s Markets Innovation Group (MIG) – a team of lawyers and developers tasked with disrupting the legal industry.
We’re told that at the end of the trial, around 3,500 of A&O’s lawyers had asked Harvey around 40,000 queries for their day-to-day client work. MIG head David Wakeling said in a statement yesterday: “I have been at the forefront of legal tech for 15 years but I have never seen anything like Harvey. It is a game-changer that can unleash the power of generative AI to transform the legal industry. Harvey can work in multiple languages and across diverse practice areas, delivering unprecedented efficiency and intelligence. In our trial, we saw some amazing results.”
Harvey – unlike most AI-based legal technology offerings to date – really doesn’t require a lot of work to get it live. A spokesperson for A&O told me (Wakeling is away this week – why law firms make big announcements during school holidays is beyond me): “One of the brilliant things about Harvey is that you don’t need to train people to use it; you just need to provide them with a short, simple list of parameters and tips (which we’ve done). So it’s quick, pain-free, and inexpensive to roll out.”
Harvey is integrated into A&O’s legal workflows and is able to generate legal documents, not just text. Asked what processes are in place to pick up errors, the spokesperson said: “A&O, Harvey and OpenAI have this front of mind. There are safeguards in place at the model level, which will continue to improve over time, and the subject matter itself (i.e. legal) is one where this is inherently less of an issue than in other domains due to the nature of what will be prompted. Importantly, there will be a lawyer in the loop reviewing all output to screen for this.”
Client data
One of the big initial questions surrounding the launch is whether Harvey is being given access to client data. The press release from A&O suggests yes, saying: “Harvey is a platform that uses natural language processing, machine learning and data analytics to automate and enhance various aspects of legal work, such as contract analysis, due diligence, litigation and regulatory compliance. Whilst the output needs careful review by an A&O lawyer, Harvey can help generate insights, recommendations and predictions based on large volumes of data, enabling lawyers to deliver faster, smarter and more cost-effective solutions to their clients.”
Sharing client data with Harvey would be problematic for a number of reasons, not least because A&O hasn’t formally consulted clients.
However, the spokesperson for A&O told me: “Along with the quality of our legal advice, maintaining client confidentiality is a key priority. That will not change. Harvey is designed for the legal industry and as a result has myriad ways to ensure client confidentiality. Harvey will not interact with client data until we know it is safe to do so.”
The spokesperson added that A&O is in regular contact with its clients; it just hasn’t done a formal consultation, as it wouldn’t for any new tool it starts using internally.
But Harvey is not just any new tool.
What contracts is Harvey trained on and useful for?
Harvey can be used for any sort of background research and first-pass drafts. The examples I have been given include questions such as:
- “I’m at a German law firm and am going to present to an Indian bank about the EU market abuse regime. Suggest the skeleton of a 5-slide presentation, and for each slide include three bullet points on its content.”
- “Draft me an email to my Silicon Valley private equity client on the difference between what constitutes a material adverse effect in an M&A deal under Delaware law as compared to New York law.”
- “Draft me a research memo examining how the Supreme Court’s Janus decision affects private sector unions.”
The question is, will Harvey get the answer right? As reported by the FT, the AI assistant comes with a disclaimer that its use should be supervised by licensed legal professionals, and it does still “hallucinate” – the term for when the programme produces inaccurate or misleading results.
Accuracy and the fear of hallucinations
OpenAI’s language model has been shown to confidently provide answers that are completely wrong. Alex Hamilton, CEO and founder of Radiant Law, told me: “The underlying technology is prone to hallucinations (aka making stuff up, not just being wildly wrong); does not show its workings (it’s a black box); does not show its confidence levels; and lulls people into a false sense of security by answering fluently.”
He adds: “We are then up against very human problems of being a) bad at checking things that seem fine and b) not noticing things that are missing, which is why checklists are such an important tool.”
This is a fear shared fairly widely in the market. Law firm technology heads are not keen to be seen to publicly challenge A&O, and many are looking at their own GPT-based solutions, but one Am Law top 100 CIO told me privately: “It’s not the pace that concerns me. We are all trying to move faster, and the pandemic reset expectations on what is possible. However, there are certainly a lot of unknowns in the models. That’s why I worry about the user behaviour aspect and how it is put into service.
“It is smart if they can make sure that people don’t trust it blindly. These models can do some pretty interesting stuff to convince you of the wrong thing. But if the user doesn’t know what they don’t know, it will be really hard to tell that what the machine says is trash.”
The CTO of a UK top 50 firm added: “You are told at law school not to use Wikipedia and given years of legal training only to go to one of the leading law firms and be given a research tool that hallucinates. The accuracy problem is a five-to-ten-year problem, and I don’t know how they have sorted that out in a few months. It has the potential to cause damage when some poor junior lawyer misses a fact of the case.”
However, providing a counter-argument to this, Zach Abramowitz, tech investor and founder of Killer Whale Strategies, told me: “Junior lawyers miss things every day. They will miss less now.”
Existential threat
Allen & Overy’s adoption of Harvey raises some fairly existential issues for the legal sector. While the magic circle firm is being progressive and leading the market, it is also helping to train a model that will become a serious competitor.
The UK top 50 CTO said: “In a move to become progressive are they not giving away their competitive edge? The accuracy of the AI will improve – helped by law firms training it – and the cost of intelligence will turn to zero. The rush to be innovative will be our downfall.”
But Abramowitz disagrees, commenting: “GPT isn’t like anything we’ve seen in the past; it’s more akin to the introduction of the iPhone. Clients will be using it themselves. They wouldn’t come to a law firm and say, ‘Why on earth aren’t you using this particular eDiscovery tool?’ but they will start coming to legal and saying, ‘Why are you not using ChatGPT?’”
He adds: “I often make fun of law firm press releases, but this one is an exception.”
One thing you can be sure of is that Allen & Overy will not be the only firm to make an announcement that it is incorporating generative AI within its practice. Other firms reported to be working with Harvey include Quinn Emanuel Urquhart & Sullivan.
Speaking to me earlier this month in a yet-to-be-published interview about GPT-3/GPT-4 and its impact on the legal sector, Professor Daniel Katz from Illinois Tech’s Chicago-Kent College of Law said: “In the next 12 months, probably less than 12 months, there will be many other large language models in a competitive market. So GPT will not be the only thing we’re talking about by the end of this year.”
Within days of that conversation, Google announced the launch of its GPT rival Bard.
What impact all of this will have on existing legal technology providers is a topic for another day. The toothpaste, as they say, is well out of the tube.
caroline@legaltechnology.com
Hi Caroline, great article. I believe AI is not a question of if but when.
Those firms that take advantage of new technology will have a competitive edge. Firms that don’t should be introspective as to why they are sabotaging their future. Is it fear of the unknown? A lack of knowledge? The Luddites destroyed machinery circa 1812 because they feared for their jobs. Today, AI won’t replace you – a person using AI will replace you. Clients don’t care if the service they receive is helped by a robot; they just want the job done accurately, quickly and cost-effectively.
A total disaster waiting to happen. ChatGPT is very, very unreliable. Dangerously so.