Guest post: Moving fast but responsibly – A briefing from NetDocuments on the promise and pitfalls of generative AI

By Dan Hauck and John Motz

Almost every day now, customers reach out to ask our team at NetDocuments the same question: Are ChatGPT and similar technologies hype…or are they real?

Both IT and legal professionals are closely monitoring the headlines about generative AI. Many have seen the LexisNexis survey in which nearly half of lawyers agreed that “their clients will expect them to use cutting-edge technology, including generative AI”. Or they’ve read the Goldman Sachs report that named the legal industry as the second most likely to be transformed by generative AI.

Yet many in our industry have also seen this movie before: a new legal AI company or technology emerges with a splashy demo, a breathless press release, and promises of a new era of law practice. Flash forward a year or two, and the hype has fizzled while the practice of law remains largely unchanged.

So, which is it this time: hype or reality?

Let’s be clear: generative AI is real.

Unlike many “legal AI” technologies of the past, which have failed to live up to expectations, this new breed of large language models (“LLMs”) is based on a true breakthrough in machine learning techniques. Companies like OpenAI and Microsoft have used specialized supercomputers to apply these novel techniques to data sets encompassing much of the public internet.

The results are capabilities that outperform prior state-of-the-art AI across a wide range of legal tasks, from textual analysis to drafting. Based on our own internal testing and development, we believe LLMs like OpenAI’s GPT series can address the fundamental challenge that has frustrated the legal industry for decades: turning the copious amounts of unstructured data embedded in all manner of legal documents into categorical knowledge.

This knowledge becomes actionable intelligence that drives a flywheel of quality and efficiency. As this new reality sets in, the next question legal organizations need to ask themselves is how they can use generative AI to accelerate their businesses today and futureproof them for tomorrow.

A winning strategy: Unlocking your data and documents with generative AI

Just because generative AI is real does not mean every solution that incorporates it is right for your use cases or your business. When assessing generative AI products, firms and general counsel’s offices must make a clear-eyed assessment of the value they expect to gain from adoption and weigh it against the potential downsides.

You’ve likely seen the demos – or experimented yourself – where providing a prompt like “draft a contract for a commercial lease in New York” generates a passable version of the desired document. But when real experts look closely at those drafts, they often identify important omissions or even outright errors.

This gets to the heart of both the power and pitfalls of current LLMs: the results are only as good as the prompt the user provides. Frankly, many of the products being launched today are thin layers on top of OpenAI’s GPT models. Without the ability to design prompts intelligently and incorporate crucial context, firms will struggle to produce accurate and valuable results.

Based on our own testing, the most valuable generative AI use cases arise from leveraging a firm’s corpus of documents and associated data – whether to search that corpus, classify documents, analyze and extract data from its contents, or generate new drafts based on prior precedent. When this critical context can be embedded into a prompt, it supplies the otherwise missing guardrails required to unlock the LLM’s best responses. Equally important, these AI-generated responses are informed by the prior efforts and experience of the firm, enabling organizations to extend their proprietary knowledge into future work.
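To make the idea concrete, here is a minimal Python sketch of what embedding document context into a prompt can look like. This is an illustration, not NetDocuments’ actual implementation: it assumes a retrieval step has already selected relevant excerpts from the firm’s corpus, and the function names and prompt wording are our own hypothetical choices.

```python
def build_grounded_prompt(question, snippets, max_chars=4000):
    """Assemble a prompt that embeds retrieved document excerpts as context,
    instructing the model to answer only from that context.

    `snippets` is a list of (title, text) pairs already selected by a
    hypothetical retrieval step; `max_chars` caps the context to respect
    the model's context-window budget.
    """
    context_parts, used = [], 0
    for title, text in snippets:
        entry = f"[{title}]\n{text.strip()}"
        if used + len(entry) > max_chars:  # stop before exceeding the budget
            break
        context_parts.append(entry)
        used += len(entry)
    context = "\n\n".join(context_parts)
    return (
        "Answer using ONLY the excerpts below. If the answer is not "
        "present in the excerpts, say so rather than guessing.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}\n"
    )

# Example: grounding a lease question in two (invented) firm documents.
snippets = [
    ("2022 Master Lease – Term", "The initial term is ten (10) years."),
    ("2022 Master Lease – Renewal", "Tenant may renew for two 5-year terms."),
]
prompt = build_grounded_prompt("What is the initial lease term?", snippets)
```

The assembled prompt – rather than the bare question – is what gets sent to the LLM, which is how the firm’s own precedent, rather than the open internet, shapes the answer.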

Innovating on solid foundations

Of course, this raises critical security and privacy considerations. Is it safe to share the context of your sensitive documents and data – or your clients’ data – to take advantage of these capabilities? Will that data be retained, and for how long? Will it be used to train the LLM, dissipating your unique advantages to competitors and compromising confidentiality?

Innovation must always be paired with a laser focus on security, access controls, and data privacy. These values must begin with the content platform itself and predictably inform any AI capabilities introduced. When we started work on generative AI last year, we built on two pillars our users could trust: our platform’s own content security technologies and Microsoft Azure’s enterprise-grade OpenAI models. This foundation marries the power of bleeding-edge models like GPT-4 with mature data governance practices.

This means that customers remain in control of what content is available to build prompts; who has access to the apps that use OpenAI and how those apps are built; and how firm and client data is protected from incorporation into LLMs. In other words, you must consider the entire workflow of users and data and how it can be managed every step of the way.

As one CIO commented after a recent discussion with our team, the right approach combines continuous innovation with a true commitment to security and reliability. As the pace of change accelerates, we believe firms looking to seize the opportunity of generative AI would do well to keep those words in mind.

About the Authors

Dan Hauck is chief product officer at NetDocuments. An ILTA award-winning product leader and former practicing lawyer, Dan leads product vision, planning, and execution across all areas at NetDocuments. John Motz is chief technology officer at NetDocuments. Motz is a technology-savvy leader with over 25 years of experience developing and implementing enterprise SaaS software.

Legal IT Insider does not charge for editorial comment pieces, which appear here on merit. If you would like to submit an article for consideration, please contact