AI rEvolution – Testing Generative AI for eDiscovery Review – Planning 101
As we begin another trip around the sun, there are a few eDiscovery topics that will remain top of mind through all 12 months of 2025 (and beyond). The use of next-generation AI in the legal industry is certainly near the top of that list.
While AI in eDiscovery, and document review specifically, is a rather small portion of that landscape, it is an area in which Generative AI has established a foothold, and it represents a unique opportunity to explore what the latest AI can do in our field.
As we engage with clients on this topic, the discussions start with some variation of: “We want to understand it and be ready to use it but don’t know where to start or how to plan for the process.”
The process itself is relatively straightforward, and it is neither overly time-consuming nor expensive. It does, however, exemplify the saying “failing to plan is planning to fail.” Furthermore, that failure could come with a variety of very costly consequences.
The remainder of this brief post offers cheat codes to avoid those pitfalls.
Approvals – Get them
While this is common sense, it is more than that. The past two years of rapid AI evolution have led to a dramatic increase in regulation, policy, and contract language surrounding its use. Your organization, and/or your clients’ organizations, almost certainly have a team (or teams) responsible for ensuring compliance. Track them down and get their blessing.
The time it takes to check this box will vary. You may want to identify a good project for your testing (see next topic) before initiating the process(es). You don’t want to work through it only to find that there is no good testing data at the end of that particular rainbow.
Identify a Testing Project
The key to success here is maximizing the utilization of work you are already doing (or have done).
Good testing data will need “gold standard” review decisions against which to compare the AI decisions. Beyond identifying data for which you have (or will have) review decisions, you will want to be sure the review process creates decisions comparable with AI decisions. These adjustments are quite easy to make at the outset of a review. On the other hand, they may be virtually impossible to make on a completed review. If you have tried to reconcile “family complete” review decisions with traditional TAR workflow testing, you will recognize this challenge.
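To make the comparison concrete, a minimal sketch of scoring AI decisions against gold-standard human decisions follows. The function name, document IDs, and True/False responsiveness coding are all hypothetical illustrations, not any particular review platform’s API; the point is simply that per-document decisions on both sides allow straightforward precision and recall calculations.

```python
def agreement_metrics(gold, ai):
    """Compare per-document AI decisions to gold-standard human decisions.

    gold, ai: dicts mapping doc_id -> True (responsive) / False (not).
    Returns precision and recall of the AI relative to the human calls.
    """
    tp = sum(1 for d in gold if gold[d] and ai.get(d))        # both say responsive
    fp = sum(1 for d in gold if not gold[d] and ai.get(d))    # AI over-calls
    fn = sum(1 for d in gold if gold[d] and not ai.get(d))    # AI misses
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

# Hypothetical decisions for four documents
gold = {"DOC-1": True, "DOC-2": False, "DOC-3": True, "DOC-4": False}
ai   = {"DOC-1": True, "DOC-2": True,  "DOC-3": False, "DOC-4": False}
print(agreement_metrics(gold, ai))  # {'precision': 0.5, 'recall': 0.5}
```

Note that this kind of per-document scoring is exactly what “family complete” coding frustrates: when a whole family inherits one decision, individual documents no longer carry independently comparable calls.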
Beyond the data and review decisions, consider access to the subject matter experts (SMEs) familiar with the review you want to test against. As part of the testing process, you will need to iteratively adjust the review protocol used by the AI (“AI Prompts”). Your ability to do this quickly and without a significant time investment likely relies on access to those SMEs.
Subject Matter Expert Time
Speaking of SMEs, how much of their time will you need? Not much, really. That said, you must have it. You will need the SMEs to:
- AI Review Prompt Authoring – Assist in translating review protocols into AI-compatible prompts. This is essentially a style translation: you will not be asking the SMEs to become “prompt engineers.” Your technology and service partners should provide all the assistance you need to adjust a standard review protocol for effective use by AI.
- AI Review Analysis – Analyze instances where the AI disagrees with human review decisions. The objective is to identify areas where the AI is consistently generating decisions inconsistent with the submitted protocol’s intent.
- AI Review Prompt Adjustments – Complete, or consult on, adjustments based on the results of the AI Review Analysis.
How Will You Measure Results?
Know how you will measure AI’s success going into the process. Do you need to run it against the entire population, or can a properly sized control set-based test meet your needs? This decision will drive both workflow design and costs. Resources (time and money) for a large full-population test review are generally hard to come by. Don’t let that discourage you! Intelligent workflow design and statistics can give you the necessary familiarity and confidence.
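As a rough illustration of why a properly sized control set can substitute for a full-population test, the standard simple-random-sample size formula shows how few documents are needed for a given confidence level and margin of error. The function name and defaults below are illustrative assumptions, not a prescription for any particular matter.

```python
import math

def control_set_size(z=1.96, margin=0.05, p=0.5):
    """Simple-random-sample size for estimating a proportion.

    z: z-score for the confidence level (1.96 ~ 95% confidence)
    margin: desired margin of error (0.05 = +/- 5 points)
    p: assumed prevalence; 0.5 is the conservative worst case
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(control_set_size())             # 385 docs: 95% confidence, +/- 5%
print(control_set_size(margin=0.02))  # 2401 docs: 95% confidence, +/- 2%
```

The practical takeaway: a few hundred to a few thousand carefully sampled and reviewed documents can yield statistically defensible estimates, which is usually far cheaper than re-reviewing an entire population.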
Conclusion
While testing Generative AI-enabled review tools is logistically straightforward, appropriate planning is critical to achieving the confidence you need and, perhaps more importantly, to avoiding the potentially significant and costly consequences of planning failures. An intelligently designed workflow applied to properly selected data, implemented after obtaining the necessary approval(s), can safely deliver the answers you seek.
Read all of our AI blogs here: www.purposelegal.io/embracingAI