
GenAI success stories in eDiscovery

When considering our GenAI success stories in an eDiscovery context, it is worth restating that the human remains very much in the loop with GenAI solutions. As with any tool, it is only as good as the craftsperson wielding it.

Aside from the specifics of the prompts you run, we have observed that the best results are obtained when Reveal Ask is fully integrated into the investigative mindset. By that we mean it is used as a complementary tool that supports the investigator's or lawyer's thought processes, not as a replacement for them.

The LLM doesn’t know, for example, how to conduct an investigation or build a defence case, still less the specifics of your particular instruction. You are likely to have discussed the key aspects of the matter with your clients and to have developed a strategy for exploring the various themes that make up the case.

GenAI success stories

Use Reveal Ask to accelerate the exploration of each of the themes and areas of consideration, rather than expecting great things by leaping straight to the $64,000 question. In a simple case you may strike gold, but it’s more likely that the case is far more complex than that and requires careful construction and lateral thought. Stuff we humans are still better at!

Building granularity into your questioning should therefore yield better results and also limit instances of truncated results. The more granular you are, the more you mitigate the risk of vital information being missed.

When reviewing the responses Reveal Ask provides, you will see the number of references (‘chunks’ of documents) and the overall number of documents it has considered. Each reference is a section of the semantic index that relates to the query derived from your prompt. In its current implementation, Reveal Ask imposes a limit of 100 references before it moves on to synthesising the response. So when you see the reference count at 100, it is a clear indication that you may be missing some relevant content.

The first thing to do is check the semantic similarity percentage of the later references to your question. If these are low, it is less likely that you are missing much of substance; if they are still quite high, it implies there is a lot of rich content within your context window and the risk of missing valuable detail is greater. If this is the case, refine your context window (by applying further queries, date ranges or other filters), make your prompt more specific, or both.
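To make the mechanics concrete, the retrieval pattern described above can be sketched in a few lines. This is an illustration of the general approach only, not Reveal’s actual implementation (which we have no visibility of); the function names, the similarity threshold and the data shape are all our assumptions.

```python
# Illustrative sketch (not Reveal's code): rank indexed chunks by semantic
# similarity, cap the references passed to synthesis, and apply the
# heuristic from the text for spotting likely truncation.

REFERENCE_LIMIT = 100    # mirrors the 100-reference cap described above
SIMILARITY_FLOOR = 0.5   # hypothetical cut-off for a "low" similarity score


def select_references(scored_chunks, limit=REFERENCE_LIMIT):
    """Take the top-`limit` chunks by similarity, as the synthesis step would."""
    ranked = sorted(scored_chunks, key=lambda c: c["similarity"], reverse=True)
    return ranked[:limit]


def likely_truncated(references, limit=REFERENCE_LIMIT, floor=SIMILARITY_FLOOR):
    """Heuristic from the text: if the cap is hit AND the later references are
    still highly similar, relevant content has probably fallen outside the set."""
    if len(references) < limit:
        return False  # cap not reached, nothing was cut off
    return references[-1]["similarity"] >= floor


# A rich context window: 150 chunks with similarity tailing off slowly,
# so the 100th reference still scores above the floor -> likely truncated.
rich = [{"id": i, "similarity": 1 - i / 200} for i in range(150)]
print(likely_truncated(select_references(rich)))
```

In the example, the later references still score above the assumed floor, so the heuristic flags the result as probably truncated; narrowing the context window or sharpening the prompt would be the next step.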

GenAI for eDiscovery: Success stories using Reveal Ask

We’ve been testing out Reveal Ask with our clients to evaluate the potential for GenAI in their eDiscovery projects. Find out about our successes, what we’ve learnt along the way and our views on where the technology will take us next.

1. Introduction, a reminder about how GenAI works in eDiscovery and a practical example about using Reveal Ask.

2. The art of prompt engineering and what we’ve learnt about the importance of getting the prompt ‘right’.

3. How to use Ask to accelerate an investigation: our learnings and strategies for success.

4. Getting the most from the integrated tool set – the power of Ask in combination with the wider Reveal toolset.

5. What works well, which use cases are best suited to the Ask capability and some observations about future enhancements.