The BBC reported this week that Hill Dickinson has blocked general access to several public AI tools after detecting a “significant increase in usage” by its staff – in an article that appears critical of the leading UK law firm without substantiating why.
The BBC says that Hill Dickinson’s chief technology officer warned staff about the use of AI tools, observing that much of the usage was not in line with the firm’s AI policy, and said that going forward staff would only be able to access the tools via a request process – in other words, the tools have not been banned.
According to the email, the BBC reports, the firm detected more than 32,000 “hits” to ChatGPT over a seven-day period spanning January and February. During the same timeframe, there were also more than 3,000 “hits” to the Chinese AI service DeepSeek. We understand that these were records of inputted prompts, with multiple prompts likely entered in a single session.
The BBC then, somewhat incongruously, quotes a spokesperson from the Information Commissioner’s Office saying that firms should not discourage the use of AI at work.
The use of consumer GenAI tools remains a complex issue for law firms, carrying the risk of staff sharing confidential client data. While a focus on AI literacy can help alleviate problems, many law firms and corporations have banned DeepSeek outright over privacy concerns. Approaches to ChatGPT still vary widely between firms, although many are leaning strongly towards enterprise GenAI tools, where they can be sure that client data is protected. The challenge presented by the likes of public ChatGPT is that it is hard to monitor and incredibly easy to use. The downside of restricting it, however, is that doing so can push usage onto private devices.
Hill Dickinson said in a statement: “Like many law firms, we are aiming to positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients. AI can have many benefits for how we work, but we are mindful of the risks it carries and must ensure there is human oversight throughout.
“Last week, we sent an update to our colleagues regarding our AI policy, which was launched in September 2024. This policy does not discourage the use of AI, but simply ensures that our colleagues use such tools safely and responsibly – including having an approved case for using AI platforms, prohibiting the uploading of client information and validating the accuracy of responses provided by large language models.
“We are confident that, in line with this policy and the additional training and tools we are providing around AI, its usage will remain safe, secure and effective.”