Clifford Chance’s early Generative AI and Copilot journey – The inside view

When Clifford Chance announced in February this year that it was going to become one of the first organisations to roll out Copilot for Microsoft 365 globally, the legal tech market’s collective jaw dropped. Was this an uncharacteristically knee-jerk reaction? And how on earth did they get budget? As with a lot of overnight decisions, it was years in the making, say the IT and risk heads who led the way.

In February this year, Clifford Chance became one of the first organisations – not just law firms – to decide to deploy Microsoft Copilot and Viva Suite globally: a huge investment that struck some as far too quick, as the industry struggled to get to grips not just with the governance of Gen AI but also with the cost implications. That struggle is very much ongoing to this day.

Law firms are, generally speaking, not first movers in tech. We leave that to the financial institutions and big banks that invest billions in tech and innovation. But here Clifford Chance was being feted by Microsoft for “taking its digital transformation journey to the next level, as one of the first organisations globally to deploy both Copilot for Microsoft 365 and Viva Suite at scale across its entire workforce.” Both Microsoft and Clifford Chance itself pointed out that this February 2024 announcement followed the deployment in 2023 of Clifford Chance Assist, developed on Microsoft’s Azure OpenAI platform, which was rolled out to the entire firm after trials with over 1,800 users.

So how did the firm get here so fast? What was the process and budgetary calculation behind the decision? And, as many have wondered over the past few months, how have they put guardrails in place? We asked chief information officer Paul Greenwood, chief risk and compliance officer Bahare Heywood, and director of legal technology and innovation Anthony Vigneron to give us a fly-on-the-wall account of events, as the legal industry continues to look to one another for essential learnings in this fast-developing area.

Policy, Process, and Risk Assessment 

It’s key to note at the outset that Clifford Chance was one of the first law firms to have a data scientist team and had been researching Transformers (the T in GPT) since 2018/2019, publishing its own research in 2019 (arXiv:1912.06905), so it had built the digital foundations required to make the most of Gen AI when it arrived. But the speed of change when OpenAI launched ChatGPT, built on GPT-3.5, took everyone by surprise.

Greenwood describes being in a meeting with chief risk and compliance officer Bahare Heywood, who referred to GPT multiple times. “We asked ChatGPT ‘What are the top risks for law firms?’ The results blew my mind,” Greenwood said. “Then we asked it to translate that into French, and then asked it to do a speech in the style of Macron. We both started screaming. I was shocked by the speed and accuracy, and I gather the French was incredible.

“We were excited from an early stage and shared the view that this was important technology that people needed to understand. You’ve seen a lot of law firms block generative AI technology; our practice was not to do that, but to give out guidance: don’t use any client information, but do experiment, with risk and IT working closely together behind the scenes.”

Greenwood refers to Amara’s Law, which says that people tend to overestimate the effect of technology in the short term and underestimate the effect in the long term, commenting: “We wanted to take a long-term view, and we pulled together a steering group to outline our position to clients. Our view was not to rush anything out. One of the first things we decided before saying anything to the outside world was that we needed a set of generative AI principles, and a policy agreed across the firm.” 

As any firm that has gone through or is going through this process knows, this is easier said than done. Heywood recalls: “This felt a little painful and slow at first. It took a lot of 3am finishes, looking and looking again at the policy. We wanted to make sure it was sustainable, and we recognised the need for guiding principles. We felt strongly from an ethical perspective the need to make our position clear on the use of AI. From my perspective, it was not just about policy but having a clear process around risk assessment.” 

Heywood’s objective was to provide the IT and innovation team with the means to move forward on Gen AI projects without having to revert to the risk team in every instance, which would slow innovation down at a time when organisations such as Microsoft and OpenAI were (and still are) moving at lightning speed.  

“We could have just thrown out a policy overnight but what we ended up with was very different to what we started with,” Heywood says. “It was trial and error and focused on ease of use and people not drowning in jargon. We needed a system that was easy to understand, so we brought in a traffic light system that talks about tools and types of data. 

“If it’s green, knock yourself out. We wanted to encourage curiosity, because if you don’t use and play with something, you’ll never understand it. Then amber and red. Green covers certain types of tools and data that we say are fine because of where the data sits, taking into account the data, tool and client. Red might be reviewing CVs or personal information.”
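The firm hasn’t published the mechanics of its traffic-light system, but purely as an illustration of the idea Heywood describes – a lookup from tool and data type to green, amber or red – a minimal sketch might look like the following. Every tool and data-type name here is hypothetical.

```python
from enum import Enum


class Light(Enum):
    GREEN = "fine to use - experiment away"
    AMBER = "check with the risk team first"
    RED = "not permitted"


# Hypothetical rules keyed on (tool, data type). Anything not listed
# deliberately falls through to AMBER so a human reviews it.
RULES = {
    ("private_azure_openai", "public_data"): Light.GREEN,
    ("private_azure_openai", "internal_data"): Light.GREEN,
    ("public_chatgpt", "client_data"): Light.RED,
    ("any_tool", "cv_or_personal_data"): Light.RED,  # e.g. reviewing CVs
}


def classify(tool: str, data_type: str) -> Light:
    """Return the traffic light for a tool/data combination,
    defaulting to AMBER (escalate) when the pair is unknown."""
    if ("any_tool", data_type) in RULES:
        return RULES[("any_tool", data_type)]
    return RULES.get((tool, data_type), Light.AMBER)


print(classify("private_azure_openai", "public_data").value)  # experiment away
print(classify("public_chatgpt", "client_data").value)        # not permitted
print(classify("new_vendor_tool", "internal_data").value)     # check with risk
```

The point of defaulting to amber is the one Heywood makes: the clearly safe cases move fast without the risk team becoming a bottleneck, while anything novel still gets a human review.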

The policy was being created while the EU AI Act was still being negotiated and takes into account internal advice on the direction and content of the Act. Vigneron observed: “It was important to do risk assessments and categorisations of use cases early, and the direction of the Act was reflected in our policy. While other firms talked about doing POCs, they hadn’t done the work yet, and if you haven’t done the work to agree acceptable use cases, you will find it hard to progress the POC.

“We wanted to be sure we understood what was happening with the data. It’s different with public ChatGPT and private Azure. We said that there are tools where you can put in sensitive data, but for many that is not the case.”

The team engaged with the firm’s clients on the policy and in the early stages many were worried about their data ending up as training data for the model. “They said a blanket ‘no’ to generative AI,” Vigneron recalls. 

That position has now been reversed and Heywood adds: “It’s about having those grown-up conversations. Where there is a disconnect, you can usually resolve it.” 

Personal Responsibility and Training 

Having a policy that you have spent significant time on is one thing, but ensuring it is adhered to is another thing altogether.  

Greenwood reflects: “Bahare has pushed the message of personal responsibility. You have a personal responsibility to check the policy and output, it’s how we run everything.” 

Lawyers have ethical and professional obligations to their clients that are core to everything they do, and that is no different when it comes to Gen AI. But with this new type of generative technology, it is essential that they understand how it works, so that misunderstanding doesn’t lead to mistakes.

Heywood said: “Our responsibility is to ensure that the policy and guidance are clear and that when you go into a tool it’s clear what the restrictions are; you don’t have to go back to the policy. When you open Teams, it will tell people ‘Don’t use it for x, y, and z, as those things are restricted.’”

Vigneron added: “The other thing is that we have mandatory training on the policy.” 

For a firm the size of Clifford Chance, this has been an enormous undertaking. The team has also been working on training supervisors on how the technology works. Heywood says: “It’s the responsibility of the supervising partner to understand where it is used: if you don’t understand it, you can’t use it. The policy says that if you use generative AI for client work, you have to tell your supervisor.” Crucially, supervisors need to understand that this type of tool may hallucinate. 

Selecting Copilot

According to Greenwood, one of the benefits of rolling out Copilot to everyone in the firm is that there is no reason for them to use other tools, so its selection had its genesis in the early days of the risk analysis.

Greenwood says that Clifford Chance’s decision-making process around the selection of Gen AI tools – including its GPT-4 tool Assist and property technology provider Orbital Witness – comes down to cost, data privacy, enterprise readiness, and usefulness.

The first question, Greenwood says, is ‘Do we need it?’ “And then we look at long-term sustainability. Startups get bought. In selecting Copilot, we decided that we were going to align ourselves with the Microsoft roadmap. Microsoft has a big stake in OpenAI.” 

Another big part of the firm’s thinking was and is that most legal tech vendors would align themselves with Copilot rather than delivering standalone products. That, in many cases, has already come to pass. 

Heywood said: “What sold me on Copilot is that Paul said he was going to make me my own version. We have hundreds of policies and finding the right one can take two days, but with Copilot in Teams, you just ask for the policy you’re interested in, and it will send you a summary, show where it’s quoted from, and give advice about it. You’ll want and need to check it, but it returns all that information within four seconds.”
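Microsoft doesn’t document how Copilot grounds its answers, so the following is only a toy sketch of the general retrieve-then-answer pattern Heywood describes: score the policy library against the question, return the best match with its source, and summarise the excerpt. The crude keyword scoring and the canned policy text are stand-ins, not Copilot APIs.

```python
# Toy retrieve-then-answer sketch; not Copilot's actual implementation.
POLICIES = {
    "gen-ai-policy.docx": "Generative AI may be used for client work only "
                          "with supervisor awareness and no restricted data.",
    "travel-policy.docx": "Business travel must be booked through the "
                          "approved portal and signed off in advance.",
}


def retrieve(question: str) -> tuple[str, str]:
    """Crude keyword-overlap retrieval: return the (source, text) of the
    policy sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(POLICIES.items(),
               key=lambda kv: len(q_words & set(kv[1].lower().split())))


def answer(question: str) -> str:
    source, text = retrieve(question)
    # A real assistant would pass `text` to an LLM to summarise; we quote it.
    return f"From {source}: {text} (check the source before relying on it)"


print(answer("Can I use generative AI for client work?"))
```

The takeaway in Heywood’s example is less the retrieval than the citation: the answer always carries its source, so the “check it” step she insists on stays easy.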

Use Cases 

The use cases for Copilot are continually evolving but Vigneron says: “The big use cases are summarisation, where for example, people are in a big email chain and want to summarise it to give a report to the client – all of the output will obviously need to be checked. The second big one is finding the needle in the haystack. If you did a bond issue for a client and want to find out how much it was for, Copilot can search for anything in SharePoint and email. The third one is organising information – which of the banks in this syndicate have responded? You could organise witnesses into a structure. The fourth is creativity, where you write a document and ask Copilot what is wrong with the argument and to find the weakness.”

The other shift that Copilot will usher in is better self-service, whether that be providing staff who would have used the Clifford Chance ServiceNow service desk with self-service options, or enabling staff to become more self-sufficient when it comes to compliance and policies.

Vigneron and his team have been monitoring feedback on use cases across various practice groups: any issues or risks have been escalated to a working group for each practice group, with that in turn fed into an overarching working group.  

ROI 

When Legal IT Insider first wrote about Clifford Chance selecting Copilot globally, the ‘back of a napkin’ sum we used to calculate the cost was this: if the firm has paid the $30 per user per month for its circa 7,000+ employees, it is adding around $2.7m a year to its annual tech bill. This is rough maths, but you get the point. It’s a lot.
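Making that napkin maths explicit (the headcount is our assumption – “circa 7,000+” needs to stretch to roughly 7,500 licensed users before $30 per user per month reaches $2.7m):

```python
price_per_user_per_month = 30   # USD, Copilot list price at launch
users = 7_500                   # assumption: "circa 7,000+" employees

annual_cost = price_per_user_per_month * 12 * users
print(f"${annual_cost:,} per year")  # $2,700,000 per year
```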

So it’s fitting to conclude with the topic that people have spent the most time discussing: what was the thinking behind the cost of the investment, and what is the ROI?

Vigneron says: “The difference we see in the market is that if you burn your investment firepower on something that isn’t comprehensive, you will have to keep going back to your CFO for another conversation. With Copilot, if we save 15 minutes per lawyer per week it will pay for itself. And we are saving more than that.”
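Vigneron’s break-even claim is easy to sanity-check: 15 minutes a week is roughly 13 hours a year, so against the $360 annual licence the saved time only needs to be worth about $28 an hour – far below any lawyer’s charge-out rate. The arithmetic below just makes that explicit.

```python
licence_per_year = 30 * 12                 # USD per user, as above
minutes_saved_per_week = 15
hours_saved_per_year = minutes_saved_per_week * 52 / 60   # = 13.0

break_even_rate = licence_per_year / hours_saved_per_year
print(f"saved time worth >= ${break_even_rate:.0f}/hour covers the licence")
# saved time worth >= $28/hour covers the licence
```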

Caroline@legaltechnology.com 

Here is a helpful summary of Clifford Chance’s AI principles and guidance: 

#1 Be AI ready 

  • Establish AI principles & policies first  
  • Establish the right technical foundation (tech platform, data strategy) 
  • Get your governance in place with diverse teams 

#2 AI in the firm

  • Choose an AI ambition that is aligned to your policies, identify your priorities & challenges, and where AI might be able to help
  • Prioritise for the best ROI 
  • Give users time to learn & explore the new human/machine relationship early, with AI as the new user interface: it’s important to give people time and space to learn
    • learning about principles, policies, guidance and the mandatory eLearning
    • learning about generative AI as a topic: ethics, risks & benefits, how to interact with AI (prompt engineering for precision & relevance), managing its limitations, how to speak about it, and how to supervise teams who use AI
    • learning how to make the most of the AI tools provided: sharing use cases with colleagues, exploring together how to find suitable use cases in the jagged frontier of AI and, importantly, continuing to learn as it all evolves so fast

#3 Finding AI Opportunities 

  • Learning how to find use cases  
  • AI first: make it a habit to use AI regularly for your tasks and problems (for example, set AI tools as your homepage, use shortcuts, keep a reminder on a post-it, add it to your team meeting agenda, or set a daily reminder on your calendar)
  • Identify everyday AI opportunities you can deploy at scale now 
  • Identify two to three game-changing AI opportunities for each function