by Ryan Reeves, Senior Vice President of US Operations, UnitedLex

While it may seem old hat to those who have been doing it for a while, e-discovery is a relatively young discipline. In many ways we’re still learning how to meet the legal requirements and challenges posed by discovery in the context of a digital world. Controlling long-term costs, particularly in the era of big data, adds a whole new layer of variability and uncertainty to an already complex environment.

Certainly, the Electronic Discovery Reference Model (EDRM) has proven useful to GCs and other practitioners as a point of reference and a resource for maximizing efficiency and cost-effectiveness. It offers a common language for vendors and practitioners, a coherent end-to-end framework composed of more or less sequential phases, and some basic best practices for each phase of the process. But in the final analysis, the EDRM is still only a model, and it presents a relatively static view of e-discovery processes that in reality are far more iterative than linear.

What I’ve found in practice is that the EDRM framework doesn’t reflect what actually happens in real matters. In my view, litigation support has less to do with steps or phases in the e-discovery “lifecycle” than with something larger and more comprehensive: data management, an underappreciated discipline that is becoming increasingly important in an era of exploding data volumes and a corresponding loss of control over costs.

The primary focus of the EDRM is on what happens to data as it gets pushed through the various stages of the model. In many ways, this reflects many practitioners’ persistent focus on the short-term challenges of individual matters, particularly the technology and costs required to get from one phase to the next. A more useful and enduring model, however, would turn the focus to long-term strategies for controlling data volumes and sound data management techniques that take a broader, multi-matter perspective.

Why commodity pricing doesn’t save you money
A big part of the problem lies in vendor responses to their clients’ concerns about price. Instead of helping clients manage their data intelligently, many vendors face powerful incentives to simply process all of the customer’s unfiltered data straight through, and they are all too willing to sweeten the deal with relatively low per-gigabyte prices.

Per-gigabyte pricing – which is essentially the cost of running data through software and ultimately storage – has become a commodity in the industry. That’s because lower unit pricing is enticing to the customer who is narrowly focused on cost-cutting for the matter at hand. Vendors offering the lowest prices know they can make up for low margins with higher data volumes. Their customers, in turn, respond favorably to unit pricing because they can compare prices among multiple vendors. For GCs, however, securing the lowest per-gigabyte price doesn’t begin to solve the persistent problem of exploding data volumes and the corresponding increases in the total cost of ownership (TCO) associated with e-discovery.
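
To see why the lowest unit price can still lose on total cost, here is a minimal back-of-the-envelope sketch; every figure in it – volumes, rates, hosting duration, the culling ratio – is a hypothetical assumption chosen for illustration, not an industry benchmark.

```python
# Hypothetical TCO comparison: the per-gigabyte "commodity" price is only one
# term in the total cost of ownership. All numbers below are illustrative
# assumptions, not benchmarks.

def matter_tco(volume_gb, processing_per_gb, hosting_per_gb_month, months_hosted):
    """Rough per-matter cost: one-time processing plus recurring hosting."""
    processing = volume_gb * processing_per_gb
    hosting = volume_gb * hosting_per_gb_month * months_hosted
    return processing + hosting

# Vendor A: rock-bottom unit prices, no culling; all 1,000 GB go through.
vendor_a = matter_tco(volume_gb=1_000, processing_per_gb=50,
                      hosting_per_gb_month=15, months_hosted=12)

# Vendor B: higher unit prices, but pre-processing culling removes 60% of volume.
vendor_b = matter_tco(volume_gb=400, processing_per_gb=80,
                      hosting_per_gb_month=20, months_hosted=12)

print(f"Vendor A (cheapest per GB): ${vendor_a:,.0f}")  # $230,000
print(f"Vendor B (culled upfront):  ${vendor_b:,.0f}")  # $128,000
```

Even with a 60% higher processing rate, the culled scenario comes in far cheaper, because hosting charges accrue on volume month after month. Unit price is the wrong variable to optimize; volume is the one that compounds.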

Corporations that fail to effectively manage and pare down their data before it is processed in response to discovery requests will eventually face exploding data volumes, over-retention and increased costs for both the IT and legal departments. Those increased data volumes become subject to massive litigation holds and general over-processing of non-responsive material. As a result, the GC inherits increased processing and hosting costs whenever a large e-discovery matter arises. Likewise, increasing volumes of documents are necessarily released into the already-expensive review process.

Recent research shows that 80% of documents processed and promoted to review using traditional linear or technology-assisted review turn out to be unresponsive, because legal teams are unable to interact effectively with the data to validate search results and remove false hits prior to processing and review. Large data sets at this stage of the game contribute to budget bloat on the IT side to store and maintain the data, with corresponding inflation of costs on the legal side to process and review vast amounts of information.
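
Review, not processing, is typically the dominant line item, which makes that 80% figure compound quickly. Here is a minimal sketch of the arithmetic; the document count, review throughput and blended hourly rate are all hypothetical assumptions for illustration.

```python
# Illustrative sketch: what an 80% unresponsive rate does to first-pass
# review spend. All inputs are hypothetical assumptions.

docs_promoted = 500_000        # documents promoted to review
unresponsive_share = 0.80      # the research figure cited above
docs_per_hour = 50             # assumed reviewer throughput
rate_per_hour = 60             # assumed blended hourly rate, USD

review_hours = docs_promoted / docs_per_hour
review_cost = review_hours * rate_per_hour
wasted_cost = review_cost * unresponsive_share

print(f"Total first-pass review cost: ${review_cost:,.0f}")  # $600,000
print(f"Spent on unresponsive docs:   ${wasted_cost:,.0f}")  # $480,000
```

On these assumptions, four of every five review dollars are spent on documents that should never have reached review – which is exactly the spend that pre-processing culling attacks.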

From this perspective, unit-price-focused vendors are not providing much help for GCs who want to confront the fundamental challenge of e-discovery – managing the information held by large enterprises in the age of big data. Those vendors simply try to make the costs of e-discovery easier to swallow by lowering the unit price. Which takes us back to the EDRM: the reference model is only a model, and e-discovery is not simply a matter of pushing existing data through a series of linear “steps” or “stages.”

Sound data management: shifting the focus from the individual matter to the long term
Few companies can succeed by focusing exclusively on expenses without paying attention to revenues. In much the same way, GCs are unlikely to regain control over data volumes and exploding costs by realizing short-term savings (e.g., transactional rates) on individual matters. As I’ve said before, the problem of data volume needs to be addressed before the data is processed – and preferably before the data is required for a particular case or investigation – and the efficiency with which data is managed should remain relevant throughout the entire e-discovery lifecycle, not just at one or two specific phases of the EDRM.

How does this translate into approaches to real-life e-discovery? First, proactive legal departments will develop strict protocols designed to ensure that only the data required to run everyday business is retained. At the same time, GCs will begin to take a new view of vendor relationships. They will be less concerned with short-term, matter-specific requirements. Instead of engaging in one-off transactions with vendors, they will seek strategic relationships with partners who can help them make substantial reductions in data volume before processing begins. This approach takes direct aim at the most significant cost areas associated with e-discovery: processing, review and hosting.

As part of the process, organizations will want to engage experienced, competent partners who understand how legal departments work, and together undertake a thorough review of previous matters and associated costs so they can target budget categories where spending is unpredictable, consistently over budget and essentially out of control. Identifying such areas helps the legal team establish priorities and specific goals when negotiating with vendors on future projects, and regularly monitoring those same areas over time surfaces additional opportunities for efficiency. The right partner should possess expertise not just in the latest technology – although this is certainly important – but also in the full array of legal issues related to discovery and data management.

GCs are unlikely to realize meaningful cost savings – and by “meaningful” I’m thinking the six-to-seven-figure range for large corporate legal departments – until they can begin culling non-responsive data from the left side of the EDRM. It’s really that simple. Commodity or unit pricing applied to this or that process may offer some short-term savings, but it doesn’t begin to address the fundamental problem of exploding data volumes, a problem that is magnified as you move “downstream” from the left to the right side of the EDRM. To regain control over costs and budgets, GCs will have to embrace pre-processing data management and data reduction, and it will probably require a new mindset to make that happen.