AI Revolution – Gen AI v TAR 1.0 – Is There a Difference?
By: Jeff Johnson, Chief Innovation Officer
TAR – More Process than Technology
“AI washing” is a phrase you’ve likely heard at least once in recent months. It refers to exaggerated or false claims about the use of AI in products or services.
I didn’t hear the term in the 2000s, but you could say that “TAR washing” was common then. Some offered to provide TAR solutions by claiming the “T” in Technology Assisted Review could include technologies like concept clustering, near-duplicate detection, email threading, etc. This was a stretch.
Technology Assisted Review, or TAR, means something more specific than the generic definition of those three words. Yes, many technologies can assist a review. That does not make them TAR.
From the start, TAR signified specific processes that harnessed people and technology to “classify” documents in one of two buckets (most commonly responsive or not responsive).
It is appropriate to view our most common TAR workflows as substantially technology-agnostic.
Gen AI is TAR (or at least can be)
For the same reason that utilizing email threading is not TAR, utilizing Gen AI IS TAR (or at least can be).
As noted above, process is the more fundamental aspect of TAR, as opposed to any specific technology or algorithm. The evolution of TAR disclosures, so that they focus on process and validation steps far more than technology selection, supports this assertion.
As tools applying Gen AI to eDiscovery review hit the market, we have to maintain the critical “human in the loop” in two key ways:
- Human input, in the form of review protocol documentation or prompting
- Qualified validation that fulfills our ethical obligation to supervise
The common best-practice recommendation for item 1 is to apply the Gen AI solution to a sample of a few hundred documents for which correct human review decisions already exist, compare the AI’s suggestions to those decisions, and iteratively update the Gen AI prompting until it performs well on the sample. This prompt development step in a Gen AI TAR workflow occupies the same position as model training in a TAR workflow based on machine learning AI.
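To make the comparison step concrete, here is a minimal sketch of scoring the AI’s suggestions against the known-correct human calls on a calibration sample. The function name, data shapes, and toy sample are hypothetical, not from any particular product; the idea is simply that precision and recall on the sample guide each round of prompt iteration.

```python
# Hypothetical sketch: compare Gen AI suggestions to human review decisions
# on a calibration sample. Both inputs are dicts of doc_id -> bool,
# where True means "responsive".

def score_prompt(human: dict, ai: dict):
    """Return (precision, recall) for the AI's 'responsive' calls,
    measured against the human decisions treated as ground truth."""
    tp = sum(1 for d, h in human.items() if h and ai.get(d))        # agreed responsive
    fp = sum(1 for d, h in human.items() if not h and ai.get(d))    # AI over-called
    fn = sum(1 for d, h in human.items() if h and not ai.get(d))    # AI missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy sample: iterate on the prompt until these numbers are acceptable.
human = {"doc1": True, "doc2": False, "doc3": True, "doc4": False}
ai = {"doc1": True, "doc2": True, "doc3": True, "doc4": False}
p, r = score_prompt(human, ai)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=1.00
```

In practice the sample would be a few hundred documents rather than four, and low recall in particular would prompt another revision of the review instructions before re-running.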
As for item 2, the same processes we’ve developed to validate a TAR review are likely the best method here as well.
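One familiar TAR validation step is an elusion test: sample the discard pile, re-review the sample by hand, and estimate how many responsive documents the automated review left behind. A minimal sketch, assuming a simple random sample and a normal-approximation confidence interval (sample size and counts below are invented for illustration):

```python
# Hypothetical sketch of an elusion-style validation step: estimate the
# fraction of responsive documents remaining in the discard pile from a
# random sample, with a normal-approximation 95% confidence interval.
import math

def elusion_estimate(sample_results, z=1.96):
    """sample_results: list of bools, True = a sampled discard-pile
    document was found responsive on human re-review.
    Returns (point estimate, lower bound, upper bound)."""
    n = len(sample_results)
    hits = sum(sample_results)
    p = hits / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Toy example: 400-document sample of the discard pile, 8 responsive found.
sample = [True] * 8 + [False] * 392
p, lo, hi = elusion_estimate(sample)
print(f"elusion rate = {p:.3f} (95% CI {lo:.3f} to {hi:.3f})")
# elusion rate = 0.020 (95% CI 0.006 to 0.034)
```

Whether that estimate is acceptable is a case-by-case judgment, which is exactly the supervisory role item 2 reserves for qualified humans.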
You may have already predicted where I’m headed with this but I do enjoy saying it explicitly…
The most commonly documented and recommended way to use Gen AI tools to automate eDiscovery review today is essentially TAR 1.0.
To be clear, I’m not suggesting that is a bad thing or that there is nothing new here. Not at all. I take three points from it:
- All the same concerns we have with TAR 1.0 (production without eyes on documents, etc.) have to be dealt with in a Gen AI-automated review
- Perhaps the integration of yet another new technology will help us realize that a TAR 1.0 workflow enabled by 2024 technology is far better than one based on the 2004 state of the art… and sometimes it is simply the right workflow to use
- As this realization gains traction, it has the potential to increase willingness to try Gen AI-automated review, reduce the need for new precedent, and ultimately speed adoption of this new technology (we’ll save the discussion about cost for another day 😊)
The Bottom Line
The use of Gen AI tools to automate eDiscovery review represents an evolution (NOT a replacement) of the traditional TAR process: a dramatically different technology governed by the same structured process and validation steps. Understanding this should help us (a) better understand how to develop the process, (b) accelerate the adoption of this new technology, and (c) position ourselves to take the best advantage of Gen AI’s growing capabilities in the eDiscovery process.