Editor Caroline Hill speaks to a raft of senior legal technologists to find out how GPT-3 will impact the legal sector.
So, thank goodness the world has something other than COVID-19 to talk about and it’s called GPT-3, a machine learning system that has eaten the internet and about which we can say two things, or perhaps three. 1) The long-term possibilities for legal are big but, spoiler alert, it is nowhere near capable of ‘taking over’ from lawyers. 2) GPT-3 will nonetheless no doubt start another wave of ‘end of lawyers’ debates, snore. 3) A legal tech author could probably use GPT-3 to write their next book without anybody being able to tell the difference.
The million-dollar question is, given the off-the-scale excitement around GPT-3, precisely where and how this next-gen AI program will impact the legal sector, and, I guess to a certain extent, whether it is an opportunity or a threat.
The quick ‘what is GPT-3’ bit
Ok, so you’d have to be living under a rock or avoiding social media not to have heard of OpenAI’s GPT-3 (Generative Pre-trained Transformer) but some of the explanations as to what it is overcomplicate things. It is essentially predictive text and the best description I’ve seen is that it allows humans to communicate with machines in simple English. By describing what you want, you can get code, completed sentences, layout and simple reasoning. Aside from any relevance to the legal sector, GPT-3 is mind blowing. If you want to read a generalist article that will make your jaw drop, read ‘OpenAI’s GPT-3 may be the biggest thing since bitcoin.’
While the architecture of GPT-3 isn’t brand new, it has been fed 45TB of text data by quite literally absorbing the internet, including Wikipedia, becoming the largest AI model in history. Dale Markowitz, an applied AI engineer at Google, writing for The Next Web, marvels that GPT-3 can perform specific tasks without any special tuning, meaning it can be a translator, poet, or author with fewer than 10 training examples.
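The ‘fewer than 10 training examples’ point is the key to how GPT-3 is steered: you don’t retrain the model, you simply show it a handful of worked examples in the prompt and it continues the pattern. As a hedged illustration, the sketch below builds such a few-shot prompt as plain string assembly (the translation task, examples and labels are invented for demonstration; the actual call to the model is not shown):

```python
# Build a few-shot prompt: a handful of worked examples followed by a new
# input, which the model is expected to complete in the same pattern.
def build_few_shot_prompt(task_description, examples, new_input):
    """examples is a list of (input, output) pairs; fewer than 10 suffice."""
    lines = [task_description, ""]
    for source, target in examples:
        lines.append(f"English: {source}")
        lines.append(f"French: {target}")
        lines.append("")  # blank line between examples
    lines.append(f"English: {new_input}")
    lines.append("French:")  # the model continues from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("Hello", "Bonjour"), ("Thank you", "Merci")],
    "Good evening",
)
```

The same scaffold works for summarising, drafting or question answering: only the examples change, which is why no ‘special tuning’ is needed.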
But it is not smart and Stevie Ghiassi, CEO of online communication tool Legaler and president of the Australian Legal Technology Association (but now based in Los Angeles) clarifies: “This is a huge leap in terms of how it looks at language and how it can create bodies of text but we haven’t got to artificial general intelligence yet where it is close to reasoning by itself.”
Who in the legal sector has their hands on GPT-3?
GPT-3 is currently in private beta – you can apply for a license now, but most people I’ve spoken to are still waiting to play with it. One company that already has access is legal and justice technology product design and development firm Theory and Principle, founded by CEO Nicole Bradick. Bradick told me: “We got private beta access last week and have been playing with it to see what, if anything, we can use it for. We’ve done some internal demos to work out use cases for our clients.” She adds: “Obviously, it’s very impressive and the volume of data is very impressive, but it has some big flops. It’s kind of a dumb system. It’s very good at synthesising large amounts of text and putting it together so the natural language processing possibilities are huge. But we asked it ‘who is Nicole Bradick’ and it came back with ‘a judge in Philadelphia.’ There is no judge called Nicole Bradick in Philadelphia, and if you want something that has a specific answer Google is still your best bet.”
Legal research platform ROSS has been working with OpenAI to refine their API product and you can read my interview with ROSS co-founder and CEO Andrew Arruda below. He states categorically that GPT-3 will bring about major changes in the legal sector for the better.
Rival legal research platform Casetext is also working with OpenAI on its API and it is no coincidence that legal research is one of the areas that should feel the impact of GPT-3 the most.
You can request access here in order to integrate the API into your product. OpenAI chose to release an API instead of open-sourcing GPT-3 in part to help it pay for ongoing research but also to make the AI system more accessible to smaller businesses and organisations. The company is offering free access to the API through mid-August for its private beta while it determines its longer-term pricing. Using the link above you can describe your use case or product to join the waiting list.
So cut to the chase, what is GPT-3’s relevance to the legal sector?
It’s very early days for GPT-3 and there are more unknowns than knowns but there are some obvious potential use cases, as well as areas where it is likely to be extremely unhelpful.
Search and the democratisation of legal research
One of the biggest things to watch is GPT-3’s impact in improving search within the legal research space. iManage RAVN’s global product management lead Alex Smith told me: “My understanding is that they have had unbelievable access to data but in essence GPT-3 is predictive text. Casetext is one of the APIs, so you can find similar passages in cases not just based on similar words, without going down the Recommind route of creating a massive thesaurus. It may break down search queries and bring back the things close to it.”
He adds: “This goes towards the Q&A capability that LexisNexis have been working on for years. Lexis Answer enables people to put their questions in the search engine but it traditionally kills the ‘if and when’ words and brings back the key words so you lose some of the context – how ‘is’ is semantic to ‘what’. GPT-3 breaks down the linguistic barriers.”
What does this mean for legal research platforms such as ROSS and Casetext, which curate public caselaw? Views are mixed.
Ghiassi says: “Casetext and ROSS would be adamant that this type of technology doesn’t replace what they are doing but potentially it creates a building block. Much like electricity where at first it’s mind blowing and then eventually everyone has it, eventually we will all have access to this technology. That’s significant because in the legal sector a very low number of companies are actually using AI systems.”
But what those legal research platforms provide is training, UX and confidence in the output (by involving humans) and Bradick says: “The path to training it is easy but the more training the better and I don’t think GPT-3 will eliminate the need for either of those research platforms – it has a lot of information but it’s not smart. The training that those guys are adding is huge and right now I can’t see that it’s going to be replaced.”
She adds: “The other thing to consider is that the users of legal research are lawyers and I don’t think there is going to be a case where the lawyer is comfortable to rely on AI – the research platforms feel trustworthy.”
Arruda is excited about the potential democratisation of law that GPT-3 brings into play, and tells me: “Now, thanks to GPT-3, you don’t have to be a machine learning scientist with millions of dollars to access an extraordinarily powerful natural language model. You just need to be an innovative tinkerer with access to OpenAI’s API.
“Smaller problems — and boring problems and problems faced by people other than the well-heeled — are suddenly feasible and attractive targets from an economic point of view.”
He adds: “In our own corner of the ecosystem, products in the legal research space will make it easier for people interacting with the legal system to understand and write about the law. This is a democratization. Powerful natural language models allow users to find relevant information very quickly and with high precision, even in dense legal text, and provide useful recommendations of what language to use when drafting, for example.”
Converting legalese to plain English (or vice versa)
One obvious use case for GPT-3 is turning legalese into plain English.
Commenting on LinkedIn, Nir Golan, general counsel and head of legal ops at Attenti, said: “I really like the fact that it can convert legalese into plain language and vice versa. Some great use cases in A2J, corporate self-serve, work with business customers etc.”
Michael Tefula, an associate at early-stage technology investment company Downing Ventures, posted on Twitter on 21 July: “Just taught GPT-3 how to turn legalese into simple plain English. All I gave it were two examples. Might build a term sheet and investment document interpreter out of this.”
Going the opposite way, Francis Jervis from Augrented fed GPT-3 two examples as prompts to ‘write like an attorney’, and GPT-3 did the rest. Note: It even includes relevant statutes if you mention a jurisdiction.
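The two-example approach Tefula and Jervis describe can be sketched as follows. The clauses below are invented for illustration, and the completion request against OpenAI’s beta API is shown commented out, since it requires beta credentials:

```python
# Two invented legalese -> plain-English pairs prime the model; the third
# clause is the one we want simplified. The model is expected to continue
# the pattern after the final "Plain English:" label.
prompt = (
    "Legalese: The lessee shall remit payment on or before the first day "
    "of each calendar month.\n"
    "Plain English: Rent is due by the 1st of every month.\n\n"
    "Legalese: Either party may terminate upon thirty (30) days' written "
    "notice.\n"
    "Plain English: Either side can end this with 30 days' written notice.\n\n"
    "Legalese: Time is of the essence with respect to all obligations "
    "hereunder.\n"
    "Plain English:"
)

# With private beta access, the completion would be requested roughly like
# this (engine name and parameters per OpenAI's beta documentation):
# import openai
# openai.api_key = "YOUR_KEY"
# response = openai.Completion.create(
#     engine="davinci",    # the largest GPT-3 model in the beta
#     prompt=prompt,
#     max_tokens=60,
#     temperature=0.2,     # low temperature for a faithful paraphrase
#     stop="\n",           # stop at the end of the plain-English line
# )
# print(response.choices[0].text.strip())
```

Swapping the order of the pairs in each example gives the ‘vice versa’ direction: plain English in, attorney-style drafting out.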
Ghiassi said: “What could this mean for #accesstojustice? Can everyone finally have a legal assistant to ‘translate’ the law and make it understandable?”
Access to justice
The A2J space is one where GPT-3 seems to present some of the most exciting possibilities but how far can it go? Will it mean chatbots are more capable of providing scalable, accurate and potentially free legal advice to those who most need it?
Kira co-founder and CTO Alexander Hudek told me: “Technically you could use GPT-3 on a chatbot. If you had a very large corpus of topics or legislation, then giving it a question in natural language and asking it to retrieve text is probably something it’s quite good at.” But he adds: “I don’t think the technical part is the issue, it’s the legal issues. The problem is that if you’re doing a chatbot for fun and it says something wrong, you can say ‘that’s cute’, but if it made a mistake even 1% of the time, is that an acceptable level of error? So, there is an ethical issue. If 1% of people get the wrong answer and act on it and get a bad outcome, is that acceptable? That’s why technology adoption in general for these situations is difficult. If you had a human in the loop vetting things, maybe that would change things.”
He adds: “There are things it can do fantastically. Research over public data it’s very good at. There are some application areas where it will be mind-blowing. It can help people to draft emails, although you might need a person to remove certain parts and again be in the loop. But an automated solution I’m not sure about.”
Bradick agrees, commenting: “Right now I wouldn’t feel comfortable leveraging this as a sole source of information: it’s going to be wrong without training but it does seem like a modest amount of training gets you a good answer. How close do we feel to the margin of error that a lawyer would also have? Chatbot functions are created by lawyers and they are most likely to be correct but going to margin of error where something is not thoroughly described is always a risk – how much risk does this bring and what benefit do we get in exchange?”
There are already projects underway in the access to justice space to train an algorithm to spot legal problems described in natural language – Suffolk Law School and Stanford Law School’s Learned Hands is one, led by David Colarusso and Margaret Hagan respectively. Ghiassi says: “If you feed the system masses of data it becomes more sophisticated and a big leap for access to justice will be to get to the stage where basic research is handled by chatbots, which at the moment are largely just using decision trees.” Access to the right data is as important as AI itself.
Access to data is, not surprisingly, a recurring theme and when it comes to how GPT-3 might impact the contract review space, Hudek says: “One problem that we’re facing is that the type of data we need isn’t always public. Has GPT-3 been trained on EDGAR? I don’t know.”
This is a point reiterated by Diligen’s COO and co-founder Laura van Wyngaarden, who I communicated with over LinkedIn, asking her what impact she foresees from GPT-3, including whether it’s a threat or something they can work with. She told me: “GPT-3 can apply to many subject areas, including law. Within the legal domain, it can foreseeably assist with drafting contracts, memos, and emails to clients, with legal document translation, and legal research. We specialize in contract review work at Diligen and we see GPT-3’s usefulness in helping lawyers get answers in natural language about the contracts they are drafting or reviewing as well as identifying red flags in their agreements. However, the degree to which GPT-3 will be useful in legal is unclear. OpenAI’s implementation of the model is trained on a wide range of data from the internet. This is great for a variety of use cases, but in legal, the best legal analysis is often stored within the document management systems at law firms. If that kind of data is included in the dataset, the reliability and applicability of GPT-3 to the kinds of tasks lawyers do everyday would likely increase dramatically.”
Privacy and confidentiality
One major issue with GPT-3 is that it is designed to memorise and share content. Hudek says: “If I said to our clients that our system will memorise your content and show it to everyone, it would be game over. That’s the challenge with deep learning: we’ve experimented with smaller models such as GPT-2 and we’ve tracked them to see how they compare and on smaller models they match our technology but at a pretty big cost in terms of training and privacy issues. Because of those issues we haven’t adopted GPT-2 and GPT-3 makes those issues worse.”
One of the other big concerns around GPT-3 is bias: it will share all of the bias of the internet around gender, race and religion and as we all know, the internet can be an extremely racist, discriminatory place.
You might wonder whether the sheer volume of data on the internet would dilute bias but Bradick points out: “It has read every book and the history of the world to now has largely been written from the perspective of a white man – we can expect it to have bias.”
Hudek says: “You’re going to dilute it to an extent, but the internet is biased – the reason they are doing a limited roll out is they are trying to find ways to deal with bias. Normally you’d deal with bias by fixing the data but you can’t fix the internet, so they are trying to come up with other ways.”
OpenAI has launched a toxicity content filter API, which rates all content created by GPT-3 on a toxicity scale from one to five, and anything above a two is flagged for moderation. It is carefully monitoring use cases for its APIs and says it will terminate API access for obviously harmful use cases.
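The moderation policy described above – score every output on a one-to-five scale and hold anything above two for review – is simple enough to sketch in a few lines. The function name and sample scores below are invented for illustration; only the scale and threshold come from the reported policy:

```python
# Flag completions whose toxicity score exceeds the moderation threshold.
# Scores follow the reported one-to-five scale; anything above 2 is flagged.
TOXICITY_THRESHOLD = 2

def needs_moderation(scored_outputs):
    """scored_outputs: list of (text, toxicity_score) pairs.
    Returns the subset that should be held for human review."""
    return [(text, score) for text, score in scored_outputs
            if score > TOXICITY_THRESHOLD]

flagged = needs_moderation([
    ("A plain summary of the contract.", 1),
    ("A borderline passage.", 2),      # exactly 2: not flagged
    ("An abusive completion.", 4),     # above 2: flagged
])
```

The point of a threshold filter like this is that it is a post-hoc control: it does not remove bias from the model, it only gates what the model is allowed to show users.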
However, one of the issues with GPT-3 not being open source is that it is harder to interrogate the results. Ghiassi says: “It’s similar to blockchain: if it’s not open source, how can you know the bias – it’s a black box.”
The New Zealand government this week became the first to launch a charter stating that it will not rely on algorithms for important decisions that affect people’s lives unless it understands the data and, without going off on too much of a tangent, this is surely a step that should be emulated around the world.
Hype or game changer?
The social media hysteria over GPT-3 even drove Sam Altman, who co-founded OpenAI with Elon Musk, to warn the world at large to get a grip, commenting on Twitter on 19 July: “The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”
Striking a note of caution too is professor Daniel Katz, who has requested API access on behalf of The Law Lab @Illinois Tech – Chicago Kent College of Law, where he is a director. I asked Katz what he thinks of GPT-3 so far and he said: “There are a few demos out there which look promising (but lots of things look good in a demo but are not robust in the end). I think the open question is always to what extent we can project any general advance in NLP to the domain specific challenges here in law-law land.”
But he added: “On the positive side, I think every major and even minor advance presents opportunities for us here in legal. It took a while but for example you see many of the products in our space taking advantage of Word Embedding methods (word2vec, BERT, ELMo, etc.)”
If you’re looking for a useful benchmark in terms of hype Bradick says: “In the universe of hype for new technology we always want to be cautious but I do think this will have significantly more impact than blockchain – it’s worthy of the hype.”
Adrian Camara, CEO at Athennian observed: “It will expand the gulf of performance between organizations with and without well-structured data.”
And while there are obvious limitations, caveats, and concerns not to get ahead of ourselves, it’s important not to underplay exactly how far GPT-3 has come. Ghiassi points out: “Computing power doubles every three and a half months and we’ve had three hundred thousand times more computing power in the last seven years. People are realising that neural networks resemble the brain more than people first thought. AI is not reasoning yet but GPT-3 shows that the models are getting more expansive – it has 175bn parameters and is becoming far more sophisticated. To put it in context the human brain has 100 trillion parameters and there is a long way to go, but I can see it doing a lot of things that humans can do.”
ROSS Intelligence co-founder Andrew Arruda says GPT-3 “will change the legal sector”
“GPT-3 is a huge machine learning model that has the extraordinary capacity to predict language in a wide variety of contexts. It can generate articles, summaries, working code, and poetry; it can simplify legal language (making it more understandable to laypeople), generate legal language from simple descriptions of fact, generate requests for admission, and more; ROSS has used it to create exciting prototypes that summarize meetings between lawyers into a memo format and compare legislation across jurisdictions in insightful new ways.
But the most amazing part of GPT-3 and what will be its largest impact on the legal sector can’t be found on Twitter or in dev branches on GitHub. GPT-3 will change the legal sector (and many other sectors) because it lowers the barrier to using a broad class of natural language tasks in a variety of applications.
Now, thanks to GPT-3, you don’t have to be a machine learning scientist with millions of dollars to access an extraordinarily powerful natural language model. You just need to be an innovative tinkerer with access to OpenAI’s API.
Smaller problems — and boring problems and problems faced by people other than the well-heeled — are suddenly feasible and attractive targets from an economic point of view.
The consequences of this are enormous. We’ll see more developers creating more applications, collectively enriching our legal ecosystem with new ideas and new ways of solving legal problems. We’ll see better solutions, and hopefully those better solutions will be available to more people.
In the long run, we predict that technology like GPT-3 and its successors will change how the legal system creates and consumes texts, including what is created and consumed and by whom. With powerful summarizing and translation technology, how will contracts change form? Will legislation continue to be written in prose, or could it be codified in new ways to be understood by computers as well as humans (as Jason Morris has suggested)?
One thing is fairly certain: the law will change. The modes of transmission of the law have always affected its content. We are beginning another chapter in the history described by the late Dr. Peter M. Tiersma in his book Parchment, Paper, Pixels. The next item on that list could be ‘Parameters’.
In our own corner of the ecosystem, products in the legal research space will make it easier for people interacting with the legal system to understand and write about the law. This is a democratization. Powerful natural language models allow users to find relevant information very quickly and with high precision, even in dense legal text, and provide useful recommendations of what language to use when drafting, for example.
But there is definitely still a need for a knowledgeable human in the loop. GPT-3 can be very wrong sometimes; and in law, being wrong can have dire, liberty-losing consequences. GPT-3 also isn’t very good at differentiating between what’s unknowable and what is just unknown. Also, using these models means that you’re building on language that has come before. GPT-3 doesn’t reason well, and it doesn’t think morally. And what’s more, it is not accountable for the text it provides. For these reasons (and others), competent advocacy is still beyond its capabilities.”
For thoughts on what GPT-3 means in terms of translating from natural language to a formal programming language, and its potential in the rules-as-code space, see: https://medium.com/@jason_90344/computational-law-diary-what-does-gpt-3-mean-for-rules-as-code-d2f01caa6857