ChatGPT sees another Australian lawyer referred to the regulator – is it time for mandatory training?

Should basic AI training be mandatory in order to practice law? Should all Courts have minimum AI guidelines? We should be doing better by now.

An Australian lawyer has in the past few days been referred to the New South Wales regulator after he filed submissions that contained ChatGPT-generated, non-existent citations and authorities. One striking part of his attempted mitigation was that AI had been promoted by reputable legal services such as LEAP and LexisNexis as being of assistance in legal practice.  

For those of us keeping up to speed, or at least trying to keep up to speed in this post-Gen AI world, this is quite funny. Maybe not so much to LexisNexis and Leap, who have definitely never condoned using public ChatGPT for legal work and have spent a ton of money developing and promoting their own Gen AI offerings. One of many key differences is that they leverage retrieval-augmented generation, providing links to legal sources within their answers, with Leap also offering human-in-the-loop review.

That is not to say that those systems don’t hallucinate – the last Australian lawyer referred to the regulator over filing fabricated caselaw had used Leap. He did avail himself of the human-in-the-loop service, but filed his brief without reading it. With respect, there’s no cure for stupid.

It’s also not to say that public ChatGPT isn’t pretty good at legal research. In our own side-by-side tests, it returned an accurate answer to the now-famous trick question, ‘Why did Justice Ruth Bader Ginsburg dissent in Obergefell v Hodges?’ (she did not dissent), when other legal GenAI systems fell at that hurdle.

What this case arguably indicates is that the lack of understanding about AI among lawyers may grow, not shrink, as the sheer number of AI systems increases. Secondly, the judgment in this case, which we’ll come onto, also suggests that Courts themselves need to simply do better in the way they communicate about the misuse of GenAI.

To the first point, those who say that lawyers are responsible for keeping their own knowledge up to date and availing themselves of the facts are, of course, right. It’s a fundamental cornerstone of practice. It’s what lawyers do. Just read about ChatGPT and don’t be a dumbass.

But it doesn’t take a giant leap (no pun intended) to see how the AI lines get blurred for everyone but the specialists. With the advent this year of DeepSeek and the fact that the GenAI floodgates are now open, we need more practical debate on how lawyers should keep up to date with AI and what the minimum requirements to practice should be. 

There is a good argument to say that AI training ought to be compulsory in order for lawyers to continue to practice. Given that basic technology competence is not globally insisted on by regulators, you may fairly roll your eyes. But if there is a time to push for a competency requirement, surely it is now? 

To the second, it was surprising in 2025 to see the Court in this case return a fairly stock ‘generative AI systems are risky’ response. Sitting in the Federal Circuit and Family Court of Australia, Judge Skaros said: “While the Supreme Court of NSW has issued guidelines around the use of generative AI, other Courts, including this Court, are yet to develop their guidelines. The Court agrees with the Minister that the misuse of generative AI is likely to be of increasing concern and that there is a public interest in the OLSC being made aware of such conduct as it arises.” 

Without wishing to single out Judge Skaros herself, surely all Courts should have basic AI guidelines by now, even if those guidelines will evolve over time? If we’re in the business of pointing out holes that need to be plugged in the legal profession’s knowledge sharing, this appears to be a fairly glaring one.

The pace of change over the past two years has been astonishing, and mistakes are inevitable in times of great change. But given what is coming down the track this year and beyond, at an industry level we now need to do better, faster.

[email protected]