Search in the age of AI: Insights from industry leaders at ILTACON25

“Everything starts with search. It used to be about finding a needle in a haystack. You want to find a precedent, something you’ve worked on, but it can be tough with Boolean search. With the advent of AI, what you’re getting is a haystack full of needles. In order to do anything, you have to find something, but that something has to be the right thing.”

This was the spot-on opening observation from Ilona Logvinova, director of practice innovation at Cleary Gottlieb Steen & Hamilton, in a packed panel session at ILTACON 2025 looking at the changing nature of search in the age of AI, including some myth-busting and a look ahead at how agentic AI will change the nature of knowledge management.

Chaired by Douglas Freeman, enterprise search manager at Simpson Thacher, the panel tracked the evolution of search, starting with a reflection on the origins of enterprise search. “We realised we have the people, documents and matters and can combine them to create enterprise search: we have the metadata and we know you’re a tax partner in Palo Alto, so we could add context,” recalled Oz Benamram, former chief knowledge and innovation officer at Simpson Thacher, now advising law firms as a fractional CKIO.

Conceptual search is different, and over the last 20 years it has seen enormous progress. Even before the likes of ChatGPT, the ability to find anything and everything on the internet was amazing, observed panellist Yannic Kilcher, chief technology officer at DeepJudge. “The internet is a giant mess but a search engine makes it accessible. We have seen the advance of conceptual search away from pure words to the concept behind them and similar words and similar context. This hasn’t arrived in enterprise search – it was behind, but we’re now seeing the first forays in firms.”

Human brains are networks of neurons that learn what is similar and dissimilar; artificial neural networks model this statistically, and the resulting representations are called vector embeddings. For law firms, advances in search will mean less time structuring their data and more time on what they do with it. “We are just now starting to think of our raw material as data,” Logvinova said. “All those words are data points. When you start seeing it as that, you can do really interesting things.”
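To picture what a vector embedding does, the short sketch below uses the open-source sentence-transformers library with a common off-the-shelf model (an illustrative choice, not anything discussed on the panel) to score how close in meaning two clauses are, even when they share few words.

```python
# Minimal sketch of conceptual similarity via vector embeddings.
# Assumes `pip install sentence-transformers`; the model name is simply a
# popular general-purpose choice, not a firm-specific one.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

clauses = [
    "The supplier shall indemnify the customer against third-party claims.",
    "Vendor agrees to hold the client harmless from claims brought by others.",
    "This agreement is governed by the laws of the State of New York.",
]

# Each clause becomes a vector; nearby vectors mean similar concepts,
# even where the wording differs (indemnify vs. hold harmless).
embeddings = model.encode(clauses, convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)

print("indemnify vs hold harmless:", round(float(scores[0][1]), 2))
print("indemnify vs governing law:", round(float(scores[0][2]), 2))
```

The first pair should score noticeably higher than the second, which is exactly the shift from keyword matching to the concept behind the words.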

One misconception is that large language models themselves are search engines. “LLMs are not a search engine,” Benamram stressed. “To find the right document, you need a good search engine.”

Freeman reiterated that it is the search engine that finds the documents that you can then feed to a large language model. “You start with retrieval and then you get the results,” he said.

Kilcher, who has tens of thousands of followers on LinkedIn and hundreds of thousands on YouTube (https://youtube.com/yannickilcher), where he makes ‘cut the buzzword’ legal tech videos, said: “What most people don’t realise is that the likes of ChatGPT are not LLMs but a system – the part that goes to the internet is not an LLM. It needs to be paired with a search engine.

“The LLM reaches out to a search engine to collect the data and process it. The search engine is good at giving you the most relevant information, and the LLM is good at nuanced analysis.” This retrieval-augmented generation (RAG) process enhances the output of LLMs: the search engine finds the most relevant data and puts it into context, with more and more focus going on supplying the right context. The future is agentic AI using tools to carry out a process, and a search engine is one such tool, Kilcher said. But he added: “The secret is that agentic behaviour is something you can just buy now at the price of tokens, and what no-one else has is your tools and knowledge – what you can supply to the LLM.”
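As a rough illustration of the retrieval-augmented pattern the panel described (not any vendor’s actual implementation), the sketch below pairs a deliberately naive keyword retriever with OpenAI’s chat completions API; the in-memory document store, the scoring and the prompt wording are all illustrative assumptions.

```python
# Minimal RAG sketch: a toy retriever supplies context, the LLM does the
# nuanced analysis. Requires `pip install openai` and an OPENAI_API_KEY;
# the in-memory "document store" stands in for a real enterprise search engine.
from openai import OpenAI

DOCUMENTS = [
    "2023 engagement letter: fee cap of $250k for the Delaware merger work.",
    "Precedent indemnity clause used in the 2022 asset purchase agreement.",
    "Memo on New York choice-of-law issues in cross-border financings.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Stand-in for a search engine: rank documents by keyword overlap."""
    words = set(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer using only the supplied context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What fee cap did we agree for the Delaware merger?"))
```

The division of labour is the one Kilcher describes: the search step decides what the model gets to see, and the quality of that context largely determines the quality of the answer.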

The question on everyone’s mind is where search is going next and how AI will change the search experience.

Kilcher said: “The next iteration of LLMs is really good at reading, and can evaluate or change a query to find even more relevant things.”

Logvinova added: “We can now equip ourselves with agent systems where search is done for us. Not by people, but by other agents. All of us ought to be supervising agents in addition to our human colleagues.” She added: “We’re not limited to one search but can set up multiple simultaneous searches that apply your expertise on top. This is the important shift. It’s not just about the tool, but the set of processes tied to the tool that is emerging.”
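One way to picture Kilcher’s query rewriting and Logvinova’s multiple simultaneous searches is a model fanning a single question out into several reformulations and running them in parallel. The sketch below is purely illustrative: the rewrite step and the retriever are stubs standing in for an LLM call and the firm’s search engine.

```python
# Illustrative only: a (stubbed) LLM rewrites one question into several
# queries, which are searched concurrently and the results merged.
from concurrent.futures import ThreadPoolExecutor

def rewrite_query(question: str) -> list[str]:
    """Stub for the LLM rewrite step: in practice a model would propose
    synonyms, alternative phrasings and narrower sub-questions."""
    return [
        question,
        f"precedents relevant to: {question}",
        f"internal memos discussing: {question}",
    ]

def retrieve(query: str) -> list[str]:
    """Stand-in for the firm's search engine (see the RAG sketch above)."""
    return [f"top documents for: {query}"]

def multi_search(question: str) -> list[str]:
    queries = rewrite_query(question)
    with ThreadPoolExecutor() as pool:
        result_sets = pool.map(retrieve, queries)
    # Merge the parallel result sets, dropping duplicates but keeping order.
    seen, merged = set(), []
    for results in result_sets:
        for doc in results:
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

print(multi_search("fee cap agreed for the Delaware merger"))
```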

Until now, technology has done the searching, while humans have found the results, derived meaning and taken action. Benamram said: “Now agentic AI can do all four steps. You could create a loop in which no human is involved at all. You can monitor a client all night. If they get sued, the agent puts a calendar meeting in the diary with the relevant people, runs a conflict check, and drafts an engagement letter.

“The exciting thing is that you can be in control by connecting things and making them accessible to your lawyers and clients. Most law firms underestimate the change that is coming: the delta is not the 15% efficiency gain within the law firm, but rather that another 60% of the time, the phone isn’t going to ring. Clients will adopt AI instead of reaching out to their outside counsel, and the future of law firms depends on them being able to perform proper KM to make their knowledge AI-consumable, so that the AI their clients use will interact with the firm’s knowledge.”
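Benamram’s overnight-monitoring scenario can be read as a workflow of tools wired together. The sketch below is a deliberately simplified, hard-coded version: every function is a made-up stub, and in a genuinely agentic system an LLM would decide which of these tools to call and when, with the firm’s docket feeds, conflicts database, calendar and drafting models behind them.

```python
# Hypothetical illustration of the "monitor a client all night" loop.
# All tool names and return values are invented stubs.

def check_new_lawsuits(client: str) -> list[str]:
    return ["Acme Corp sued in S.D.N.Y. over a contract dispute"]  # stubbed event

def run_conflict_check(matter: str) -> bool:
    return True  # stub: assume the conflicts system reports no conflict

def schedule_meeting(matter: str, attendees: list[str]) -> None:
    print(f"Calendar invite sent for '{matter}' to {', '.join(attendees)}")

def draft_engagement_letter(client: str, matter: str) -> str:
    return f"[Draft engagement letter for {client} re: {matter}]"

def monitor_client_overnight(client: str) -> None:
    for matter in check_new_lawsuits(client):
        if not run_conflict_check(matter):
            continue  # in practice a human would be looped in here
        schedule_meeting(matter, attendees=["litigation partner", "relationship partner"])
        print(draft_engagement_letter(client, matter))

monitor_client_overnight("Acme Corp")
```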