AI rEvolution – AI-based First Pass Review – Servant Becomes Master
By: Jeff Johnson, Chief Innovation Officer
In my last post, I suggested that a best-practices GenAI first-pass review process is a TAR 1.0 workflow.
I stand by that. In broad strokes, they are the same.
That said, there is a difference, with pros and cons, in the “model training” portion of the workflow.
Machines Have Turned the Tables on Us
Our AI overlords are using eDiscovery review to exert their dominance. Ok, that’s hyperbolic. All in fun. Here’s what I mean.
In a traditional TAR 1.0 workflow, expert attorneys spend hours (very likely days) “training the model.” By learning what the human experts think about some documents, the machine-learning AI learns what to think about other documents. The tools designed around this AI help the process along by selecting the documents for which attorney decisions will provide the most guidance to the AI’s learning.
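To make that training loop concrete, here is a minimal sketch of one common TAR 1.0 pattern: uncertainty sampling with a scikit-learn-style classifier. The feature vectors, labels, and the expert_label oracle are illustrative stand-ins for real documents and attorney decisions, not any particular vendor’s implementation.

```python
# A minimal sketch of a TAR 1.0 active-learning loop (uncertainty sampling).
# All data and the expert_label "oracle" are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))           # document feature vectors (e.g., TF-IDF)
true_labels = (X[:, 0] > 0).astype(int)   # hidden "responsive" flags, demo only

def expert_label(idx):
    """Stand-in for the expert attorney's call on one document."""
    return true_labels[idx]

# Seed set: a handful of known responsive and non-responsive documents.
seed_pos = list(np.where(true_labels == 1)[0][:5])
seed_neg = list(np.where(true_labels == 0)[0][:5])
labeled = seed_pos + seed_neg
y = {i: expert_label(i) for i in labeled}
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression()
for round_num in range(5):
    model.fit(X[labeled], [y[i] for i in labeled])
    # The tool's job in TAR 1.0: pick the documents whose attorney
    # decisions will teach the model the most (probability nearest 0.5).
    probs = model.predict_proba(X[unlabeled])[:, 1]
    most_uncertain = np.argsort(np.abs(probs - 0.5))[:20]
    for j in most_uncertain:
        doc = unlabeled[j]
        y[doc] = expert_label(doc)   # hours of expert review, one call here
        labeled.append(doc)
    unlabeled = [i for i in unlabeled if i not in set(labeled)]
```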
With the new GenAI-enabled tools, the days of AI caring (or at least appearing to care) what we think about documents have passed. Our new AI models already know what they think. If we don’t like the answer they give us, we have no choice but to change the question.
In a GenAI workflow, expert attorneys craft a review protocol (much as they would for a human review team) and provide that protocol to the AI reviewer, let’s call her Clarice, along with some sample documents to review.
If Clarice provides any “wrong” answers, the experts then go back to the protocol drawing board, attempting to make adjustments that will elicit more “correct” responses from Clarice.
That process of selecting test documents, checking the results Clarice generates, adjusting the protocol, and re-checking results continues until the experts have learned how to ask questions that Clarice answers (often enough) the way they want her to.
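In code terms, that cycle might look something like the sketch below. The protocol text and helper functions are hypothetical, and the call_llm client stands in for whatever model the review platform exposes; commercial tools wrap all of this in a UI, but the shape of the loop is the same: run, read the misses and their rationales, edit the protocol by hand, and re-run.

```python
# A hypothetical sketch of the protocol-refinement loop; the protocol
# text, helpers, and LLM client are illustrative, not any product's API.
PROTOCOL = """You are a first-pass reviewer on the (hypothetical) Acme matter.
Mark a document RESPONSIVE if it concerns the 2019 supply contract.
Answer RESPONSIVE or NOT RESPONSIVE, then give a one-sentence rationale."""

def review_document(call_llm, protocol: str, doc_text: str) -> str:
    """Ask 'Clarice' for a decision plus the rationale behind it."""
    return call_llm(f"{protocol}\n\n--- DOCUMENT ---\n{doc_text}")

def test_protocol(call_llm, protocol, sample_docs, expert_calls):
    """Run the protocol over expert-labeled samples; return the misses.
    The rationale attached to each miss is what guides the next edit."""
    misses = []
    for doc, want in zip(sample_docs, expert_calls):
        answer = review_document(call_llm, protocol, doc)
        if not answer.startswith(want):
            misses.append((doc, want, answer))
    return misses

# Humans close the loop: read each miss and its rationale, revise
# PROTOCOL by hand, and re-run test_protocol until Clarice answers
# "correctly" often enough.
```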
Think Like a Machine – Data In, Logic Out
From a workflow-execution perspective, this version of the process can seem quite nebulous (and therefore uncomfortable), for two reasons:
- First, Clarice is not helping us identify the best, most representative, or most challenging documents to use in prompt development. The humans in the middle of this process cannot simply sit down and review the documents that Clarice tells them to; they have to assemble their own test samples (one simple approach is sketched after this list).
- Second, Clarice isn’t providing direct prompt adjustments. Humans are responsible for flagging the responses they don’t like, analyzing the subject documents, and crafting the appropriate prompt/protocol changes. Clarice does (generally) provide decision rationales that help.
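On that first point, one plausible manual approach is plain stratified sampling, so the test set spans the document population instead of clustering in one custodian’s inbox. The sketch below assumes each document carries a custodian field; that is an illustrative choice, not a requirement of any tool.

```python
# One way humans might assemble a diverse test set themselves, since
# Clarice won't pick the documents for us. A sketch only; the
# "custodian" field is an assumed document attribute.
import random
from collections import defaultdict

def stratified_sample(docs, per_stratum=5, seed=42):
    """Pick up to `per_stratum` docs from each custodian's pile."""
    random.seed(seed)
    by_custodian = defaultdict(list)
    for doc in docs:
        by_custodian[doc["custodian"]].append(doc)
    sample = []
    for pile in by_custodian.values():
        sample.extend(random.sample(pile, min(per_stratum, len(pile))))
    return sample
```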
Neither of these changes requires skills that attorneys don’t generally have, and the GenAI-enabled review tools available today already do a great deal of the “Thinking Like a Machine” for us. Even so, it’s like starting a new exercise routine and feeling the burn that comes from using muscles you haven’t recently used in that way. Which brings me to the workflow advantage I want to point out here…
As we adapt to the new process, Clarice will train us in how best to work with her. We will select sample testing documents more efficiently and write our prompts more effectively. As a result, there is great potential for this portion of the process to move much faster and cost much less. Feel the burn!
The Bottom Line
GenAI-powered first-pass review involves a workflow that parallels TAR 1.0. That does not mean the execution steps are the same. Specifically, the steps necessary to elicit “correct” responses from the AI are different and involve a different skill set. In its current evolution, this process can initially feel a little like guesswork. Don’t let that scare you. Solid validation processes are always available (one example is sketched below), adaptation is possible, and there could be time/cost advantages that make it worthwhile.
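For instance, a simple elusion test, one common validation technique, samples from the pile Clarice marked not responsive and estimates how much was missed. The counts below are illustrative assumptions, not benchmarks from any real matter or tool.

```python
# A sketch of a simple elusion test; all counts are illustrative.
import math

discard_pile = 80_000      # docs Clarice marked NOT RESPONSIVE
sample_size = 1_500        # random sample pulled for human re-review
responsive_found = 9       # responsive docs the humans found in the sample

elusion_rate = responsive_found / sample_size   # point estimate: 0.60%
missed_estimate = elusion_rate * discard_pile   # ~480 documents

# Normal-approximation 95% upper bound on the elusion rate.
se = math.sqrt(elusion_rate * (1 - elusion_rate) / sample_size)
upper_bound = elusion_rate + 1.96 * se          # ~0.99%

print(f"Elusion rate: {elusion_rate:.2%} (95% upper bound ~{upper_bound:.2%})")
print(f"Estimated responsive documents left behind: {missed_estimate:.0f}")
```

Whether an estimate like that is acceptable is a proportionality call for the case team, and that is exactly the kind of defensibility conversation the validation step exists to support.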