The “managed” part included people management skills, as well as process and workflow skills to distribute work and develop feedback loops for guidance, correction, and adjustments.
Technology-assisted review (TAR) helped automate this process, making it less expensive, faster, and more consistent, but the learning curve was still long and steep.
Artificial intelligence (AI) is fundamentally different. AI takes the process formerly required for managed review and turns it on its head. Now, the subject matter expert (SME) on the litigation team can apply their knowledge of the case directly to the ESI and documents and get feedback from an AI tool.
There is still a learning curve for the attorney who “knows” the case facts and situation. They will still have to determine how to navigate the AI tools, but the curve is far shorter and less steep than the learning curve for figuring out how to leverage TAR with a team of reviewers.
So, what will happen to the managed review team? Where will those reviewing attorneys go? What about all the QC reviewers and review managers? Many of those lawyers will need to fundamentally change what they do for a living. Some will have to leave the legal space and learn how to commercially scale their sourdough-making skills. Others will find that some of their skills are portable... and still needed.
Two new skills are required to properly utilize AI tools: prompt engineering and sample set selection.
The litigation team SME will likely be the one who becomes good at crafting creative, useful prompts that wring the information the team needs out of the documents as AI tools apply large language models to the review. Watching the results change as new prompts are tested will be a valuable and, probably, satisfying part of the learning process. These SMEs will become better at prompts in the same way lawyers became better at key terms in document sets.
The technical process of sample set selection along with testing and applying those prompts may be a slightly different matter.
Sample sets for AI tools are different from the “random” samples used in TAR and other review processes. To reduce cost, a sample set for AI should contain only one version of each document, not multiple duplicates. It should include examples of every type of document present in the database and, within those documents, examples of the various communication and document construction styles. Are there embedded charts in the documents? Are there places people communicate that are non-standard, such as Slack, Teams, or cell phone apps?
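For readers who want a concrete picture, the selection process described above can be sketched in a few lines of Python. This is a minimal illustration, not the implementation of any particular eDiscovery platform; the field names (`text`, `doc_type`, `style`) and the `build_sample_set` function are assumptions made for the example.

```python
import hashlib
import random
from collections import defaultdict

def build_sample_set(documents, per_stratum=2, seed=42):
    """Build a deduplicated, stratified sample set for AI prompt testing.

    Each document is a dict with 'text', 'doc_type' (e.g. email, spreadsheet,
    Slack message), and 'style' (e.g. formal memo, chat thread). These field
    names are illustrative only.
    """
    # 1. Deduplicate: keep only one version of each document, identified
    #    by a hash of its content, so duplicates don't inflate cost.
    seen_hashes = set()
    unique_docs = []
    for doc in documents:
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            unique_docs.append(doc)

    # 2. Stratify: group by (document type, communication style) so every
    #    combination present in the database is represented in the sample.
    strata = defaultdict(list)
    for doc in unique_docs:
        strata[(doc["doc_type"], doc["style"])].append(doc)

    # 3. Draw a few documents from each stratum.
    rng = random.Random(seed)
    sample = []
    for _, docs in sorted(strata.items()):
        sample.extend(rng.sample(docs, min(per_stratum, len(docs))))
    return sample
```

The key design point is step 2: a purely random draw can miss rare document types entirely, while grouping by type and style guarantees that every kind of document in the database shows up in front of the AI tool at least once.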
Review managers who have spent years learning their way around eDiscovery platforms can serve as a “right hand” for litigation team SMEs when it comes to figuring out how to optimize the new AI tools available on these platforms.
QC reviewers and review managers both have a high-level view of data sets, which will help immensely in selecting and putting together the custom sample sets that AI requires. Running a “perfect” AI prompt across the wrong sample of documents can get you nowhere and, worse, can be an expensive waste of time.
Avalon is riding the front of the wave on AI adoption. We have been here to help optimize eDiscovery solutions for years, and we are focused on helping in the future. Let Avalon help you be the best you can be as we all move through this new disruption into the new discovery world that AI promises. Contact our experts today to learn more about AI for managed review.
NOTE: Martin Mayne created the image for this blog in ChatGPT.