
GenAI in eDiscovery gets real

1. Introduction to GenAI in eDiscovery

Reveal’s GenAI for eDiscovery offering, Ask, has been on general release for several months now. Having been actively involved in its beta testing, at Salient we have been introducing it to as many of our clients as possible, so that customers can evaluate the potential of GenAI in their eDiscovery projects first hand.

As we’ve said in previous articles, whilst there are plenty of areas where a GenAI solution could be focused, we see the primary application of Reveal Ask as the early assessment of large evidence estates: confirming the fact patterns as we know them and uncovering further insights, the ‘unknown unknowns’, so to speak. In essence, the value lies in accelerating the overall investigatory journey by focusing on the relevant evidence at an early stage, thereby cutting the overall time, and hence cost, of the exercise.

So how has it fared thus far and what have we learned?

GenAI in eDiscovery: how it works

But first, a quick recap of the process and a reminder of three key terms from the world of GenAI that are worth revisiting:

  • Retrieval Augmented Generation (RAG) – the process, used by many GenAI solutions, of using search to ensure that relevant content is considered when synthesising a response;
  • Context – in conjunction with RAG, the results of the search and filtering, exposed to the Large Language Model (LLM) via the context window so that it can couple them with its generic understanding of the topic being explored; and
  • Grounding – the ‘prompt’ or question you ask of the LLM, which, when coupled with the context, focuses the LLM’s generic understanding of the topic and applies it to the material to which you are exposing the LLM.
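The three terms above can be sketched in a few lines of code. This is a minimal illustration only, not Reveal’s actual API: the search function, toy corpus and prompt template are all hypothetical stand-ins, and the point is simply how retrieval (RAG), the context window and the grounding prompt fit together before anything reaches the LLM.

```python
# Hypothetical sketch of the RAG pattern: retrieve likely-relevant documents,
# place them in a context window, and ground the LLM with a specific question.

def retrieve(corpus, query_terms, limit=5):
    """RAG step: a simple keyword search over the evidence corpus."""
    hits = [doc for doc in corpus
            if any(term.lower() in doc.lower() for term in query_terms)]
    return hits[:limit]

def build_prompt(context_docs, question):
    """Grounding: couple the retrieved context with the user's question."""
    context = "\n---\n".join(context_docs)  # this is the 'context window'
    return ("Answer the question using only the evidence below.\n\n"
            f"EVIDENCE:\n{context}\n\n"
            f"QUESTION: {question}")

# Toy corpus standing in for an evidence estate.
corpus = [
    "Email: our Q3 price list was shared with Party X on 4 May.",
    "Minutes: annual staff picnic planning meeting.",
]

docs = retrieve(corpus, ["price", "Party X"])
prompt = build_prompt(docs, "Did Party A share pricing with Party X?")
# 'prompt' would now be sent to the LLM, which synthesises a grounded answer.
```

Only the first document survives retrieval, so the LLM’s generic understanding is applied to a small, enriched context rather than the whole estate.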

GenAI in eDiscovery: a practical example

The LLM (certainly in the case of Reveal Ask) is a closed system. It has been trained on extensive amounts of content and as such presents an apparent ‘understanding’ of many subjects. Whilst that’s not strictly true, it is a useful way to think about the GenAI technique. Let’s assume, therefore, that it has a general understanding of competition law, including, for example, that sharing pricing across an industry to unfairly influence that market is illegal.

Suppose that your organisation is investigating a competition law matter in which price collusion has been alleged, and you want to establish whether the evidence collected from your client [Party A] contains anything that would confirm such collusion.

Perhaps the allegations concern certain time periods or implicate named competitive firms in the industry [Parties X, Y and Z].

Your process might therefore be to search the total evidence base, filter out anything outside the alleged timeframe, and then apply keywords representing the implicated competitor firms. The resulting data set (your context) is therefore ‘richer’, as it is more likely than the rest of the estate to contain material relevant to the matter.

You then use the LLM to explore the supplied context, providing a specific prompt to ground it: that is, to make it apply its generic understanding specifically to your context and need. Perhaps a prompt such as:

“Are there any examples where Party A has exchanged information with Parties X, Y or Z relating to the pricing of their products?”

The LLM can then apply its understanding of price fixing and collusion to the supplied context, specifically as it applies to communications between Party A and Parties X, Y and Z.

Finally, it will synthesise its response and reference the evidence that it has used to construct its answer.
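The worked example above can be sketched end to end. Again, this is a hedged illustration with invented document IDs, dates and a stubbed-out LLM call, not how Reveal Ask is implemented internally: it shows the shape of the workflow, filtering the estate to build the context, grounding with the question, and returning an answer that cites its evidence.

```python
# Hypothetical sketch of the competition-law workflow: filter by timeframe
# and implicated parties to form the context, then 'ask' and cite sources.

from datetime import date

def build_context(estate, start, end, parties):
    """Filter to the alleged timeframe, then to documents naming a party."""
    in_window = [d for d in estate if start <= d["date"] <= end]
    return [d for d in in_window
            if any(p in d["text"] for p in parties)]

def ask(context, question):
    """Stub standing in for the LLM call: an answer plus cited documents."""
    cited = [d["id"] for d in context]
    return {"answer": f"{question} Reviewed {len(context)} document(s).",
            "sources": cited}

# Invented evidence estate for illustration only.
estate = [
    {"id": "DOC-001", "date": date(2021, 3, 1),
     "text": "Call with Party X about aligning list prices."},
    {"id": "DOC-002", "date": date(2019, 6, 1),
     "text": "Party Y contract renewal."},
    {"id": "DOC-003", "date": date(2021, 4, 2),
     "text": "Internal HR newsletter."},
]

context = build_context(estate, date(2021, 1, 1), date(2021, 12, 31),
                        ["Party X", "Party Y", "Party Z"])
result = ask(context, "Did Party A exchange pricing information with X, Y or Z?")
# Only DOC-001 is both within the timeframe and names an implicated party,
# so it is the sole document in the context and the sole cited source.
```

Note how the out-of-window document and the in-window but irrelevant one never reach the model: the filtering is what makes the context ‘richer’, and the citations are what make the synthesised answer verifiable.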

To be clear, on top of any initial filtering of results by date range or keyword searching, the Ask tool will perform further semantic searches based on the question you ask of the engine, as we explore later in the series.

GenAI for eDiscovery: Success stories using Reveal Ask

We’ve been testing Reveal Ask with our clients to evaluate the potential for GenAI in their eDiscovery projects. Read about our successes, what we’ve learnt along the way, and our views on where the technology will take us next.

1. Introduction, a reminder about how GenAI works in eDiscovery and a practical example about using Reveal Ask.

2. The art of prompt engineering and what we’ve learnt about the importance of getting the prompt ‘right’.

3. How to use Ask to accelerate an investigation: our learnings and strategies for success.

4. Getting the most from the integrated tool set – the power of Ask in combination with the wider Reveal toolset.

5. What works well, which use cases are best suited to the Ask capability, and some observations about future enhancements.