Comment: Why AI Still Can’t Match a Lawyer’s Palate

By Feargus MacDaeid, co-founder and chief strategy officer of Definely

As a former magic circle lawyer who’s now deeply immersed in the world of LegalTech, I’ve had a front-row seat to the evolution of our profession. From late-night contract analyses in mahogany-panelled rooms (well, perhaps) to driving forward the founding story of Definely, it’s been fascinating to experience both the potential and the limitations of AI in the legal world.

For good reason, there’s recently been quite the stir about OpenAI’s latest model, with bold claims about its ability to rival human reasoning. It’s hard to be objective about this without sounding critical or downbeat, especially when there has been so much hope and promise over the past few years.

While I’m as excited as anyone about technological advancement, as somebody who relies heavily on this technology in my day-to-day work, I can’t help but offer a more sobering view. Let me explain why, despite impressive progress, AI might not be ready to don the lawyer’s robes just yet.

The Unique Flavour of Legal Reasoning

First, let’s address something crucial: legal reasoning isn’t just about processing information or applying general logic. There’s a subtle but crucial distinction between general reasoning ability and legal reasoning, and it’s one that even the most advanced AI models struggle to bridge, whether you use LLMs directly or via a RAG architecture. Just type RAG into Medium and see what materialises. Countless methods have been proposed since the first RAG paper was published in 2020, and ReAct, Self-RAG, GraphRAG and others have improved the retrieval process, but have they actually improved legal reasoning?

Think about it: how many times have you heard someone jokingly say, “God, you’re acting like a lawyer”? That’s not just a quip; it’s an acknowledgement of a unique thought process. When we study law and start our careers, much of the time invested is actually spent teaching us how to think like a lawyer. The way lawyers think their way through a problem, and then go about solving it, is quite particular to the profession.

The 80% Solution: Why It’s Not Enough

Models like GPT, Claude, and LLaMA are undoubtedly impressive generalists. They might even get 80% of legal reasoning right on general-purpose logic alone. But in our profession, that remaining 20% isn’t just the cherry on top; it’s often where the real meat of legal work lies.

In a field where a single word can change the entire meaning of a contract or verdict, 80% accuracy simply isn’t good enough. The deeper question is whether the improvement in these models is sufficient to justify the cost, especially when you consider error propagation in complex legal scenarios: one bad answer triggers a chain reaction in subsequent responses, and the overall output drifts further and further from the correct one.
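To make that compounding effect concrete, here’s a deliberately crude back-of-the-envelope sketch; the per-step accuracy is a hypothetical figure chosen for illustration, not a benchmark of any particular model. If every step in a multi-step analysis is right 80% of the time, and one wrong step derails everything downstream, the odds of an end-to-end correct answer fall away very quickly.

```python
# Back-of-the-envelope illustration of error propagation (hypothetical numbers):
# if each reasoning or retrieval step is independently correct 80% of the time,
# and any single wrong step derails everything downstream, the probability of an
# end-to-end correct answer shrinks geometrically with chain length.
step_accuracy = 0.80

for steps in (1, 3, 5, 10):
    end_to_end = step_accuracy ** steps
    print(f"{steps:>2} chained steps -> {end_to_end:.0%} chance the final answer is right")

# 1 step -> 80%, 3 steps -> 51%, 5 steps -> 33%, 10 steps -> 11%
```

The independence assumption is crude, but the direction of travel is the point: small per-step gaps compound across a long chain of clauses, definitions and cross-references.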

The Human Logic Algorithm

At Definely, we’ve been tackling this challenge head-on. We’ve developed what we call a “human logic algorithm”. It’s not about creating a super-powerful language model; it’s about understanding how lawyers actually think and work through problems.

We’ve spent hours in conversation, sharing knowledge, and looking at how a lawyer would actually work through the question they’re trying to answer in a document. Once we’d worked that out, we focused on how to parse the document, segment it, and then apply those logical steps to build a powerful retrieval logic for our GraphRAG system.

This approach is critical because, however powerful the LLM, it’s only as good as the context it’s given. Our work has centred on the thesis that, when broken down into its components, legal reasoning can, to some degree, be translated into a retrieval logic that enables question answering over legal documents.
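For readers who like to see the shape of such a thing, here is a deliberately simplified, hypothetical sketch of that idea; the clause-numbering heuristics, function names and toy contract below are invented purely for illustration and are not our production code. The gist: segment a contract into numbered clauses, link clauses wherever one cross-references another, and retrieve a clause together with everything it points to, so the model is handed the context a lawyer would chase down before answering.

```python
# Minimal, hypothetical sketch: segment a document into clauses, link clauses via
# their cross-references into a small graph, and retrieve a clause together with
# the clauses it relies on. Illustration only, not a production GraphRAG system.
import re
from collections import defaultdict

CLAUSE_HEADING = re.compile(r"^(\d+(?:\.\d+)*)\s+", re.MULTILINE)
CROSS_REFERENCE = re.compile(r"\b[Cc]lause\s+(\d+(?:\.\d+)*)")

def segment(document: str) -> dict[str, str]:
    """Split a contract into clauses keyed by their number, e.g. '3.1'."""
    clauses, matches = {}, list(CLAUSE_HEADING.finditer(document))
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(document)
        clauses[m.group(1)] = document[m.start():end].strip()
    return clauses

def build_graph(clauses: dict[str, str]) -> dict[str, set[str]]:
    """Add an edge from clause A to clause B whenever A's text cites 'clause B'."""
    graph = defaultdict(set)
    for number, text in clauses.items():
        graph[number].update(ref for ref in CROSS_REFERENCE.findall(text) if ref in clauses)
    return graph

def retrieve(clauses, graph, start: str, depth: int = 2) -> list[str]:
    """Gather the starting clause plus the clauses it references, up to `depth` hops."""
    seen, frontier = [start], {start}
    for _ in range(depth):
        frontier = {ref for node in frontier for ref in graph[node]} - set(seen)
        seen.extend(sorted(frontier))
    return [clauses[n] for n in seen]

contract = """1 Definitions ...
2 Payment  The Buyer shall pay as set out in clause 3.1.
3 Term
3.1 Termination applies subject to clause 2.
"""
clauses = segment(contract)
graph = build_graph(clauses)
print(retrieve(clauses, graph, "2"))  # clause 2 plus the clause 3.1 it relies on
```

The hard part, of course, isn’t the plumbing above but the middle step: encoding the logical moves a lawyer actually makes when deciding which of those linked clauses matters, and why.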

AI: The Sous Chef, Not the Head Chef

In the end, while ‘Strawberry’ and its AI siblings are undoubtedly impressive, there’s some serious testing to do.

With all the excitement about AI being able to ‘reason’, and therefore acting as a lawyer would act, I hasten to emphasise that subtle but incredibly important difference. The discrepancy (because what self-respecting lawyer wouldn’t be looking for one) is between the ability to ‘reason’ and the ability to ‘reason like a lawyer’. The latter is inbuilt human wiring that, I’d argue, you cannot yet ‘build or train’ into an AI.

The future of LegalTech isn’t about AI replacing lawyers; it’s about creating tools that enhance our work by understanding the unique elements of legal reasoning.

So, while OpenAI’s ‘Strawberry’ might be a sweet addition to the LegalTech pantry, and I’m interested in seeing where these improvements lead, there’s still a way to go before we can confidently say that an AI can develop genuine legal reasoning, or that it would be good enough to rely upon completely.
