Benefits and common pitfalls of Early Data Assessment (EDA)

Early Data Assessment (EDA) has taken on an increasingly critical role as electronic data volumes continue to grow. Without effective EDA, organisations risk both over- and under-collection, along with escalating eDiscovery costs.

Often confused with Early Case Assessment (which investigates the broader merits of the case at hand), EDA focusses specifically on the data. Its primary goal is to pinpoint the location, nature and original context of potentially relevant data sources, and to identify any governance policies – particularly around retention or disposition – that may apply. This helps determine what data to preserve as well as what needs to be done to preserve it, limiting the risk of intentional and/or inadvertent spoliation (and associated sanctions).

Equally important, however, is EDA’s ability to safely reduce large datasets for more efficient collection, processing and review. It does this by:

  • Identifying and prioritising custodial and non-custodial data sources
  • Excluding files with no relevance (e.g. emails to large distribution lists, system files, logos and other repetitive images, and marketing emails)
  • Applying date range filters or identifying gaps in the “data timeline”
  • Building a picture of key themes and concepts to be developed during collection and processing
  • Analysing communication patterns to surface additional key persons of interest and to prioritise topics for review
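To make the filtering steps above concrete, here is a minimal sketch of a file-level triage pass. It is an illustration only: the exclusion list, function names and date window are hypothetical, and real matters would typically exclude system files by comparing hashes against a known-file list rather than by extension.

```python
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical extension-based exclusion list. In practice, hash-based
# known-file lists are the defensible way to exclude system files.
EXCLUDED_SUFFIXES = {".dll", ".exe", ".tmp", ".log"}

def within_range(ts: float, start: datetime, end: datetime) -> bool:
    """Check whether a file timestamp falls inside the review window."""
    moment = datetime.fromtimestamp(ts, tz=timezone.utc)
    return start <= moment <= end

def triage(paths, start: datetime, end: datetime):
    """Yield only files that survive the suffix and date-range filters."""
    for path in paths:
        p = Path(path)
        if p.suffix.lower() in EXCLUDED_SUFFIXES:
            continue  # excluded file type: skip without collecting
        if p.is_file() and within_range(p.stat().st_mtime, start, end):
            yield p
```

Even a simple pass like this can remove a large share of a dataset before collection, which is where most of EDA’s cost savings come from.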

Done right, this makes for a much more streamlined – and therefore faster and less expensive – eDiscovery process.

But EDA isn’t without a few potential pitfalls. These are some of the most common challenges and mistakes you may encounter along the way:

  • Accidental tip-offs: The process of identifying relevant data may alert involved parties to the investigation. For example, accessing a user’s email messages without appropriate precautions could trigger read receipts and inadvertently tip your hand.
  • Corrupted metadata: Metadata has considerable value but is fragile and can be easily destroyed without careful and systematic protections in place. Something as simple as opening a file to review its contents, copying a file to another location, or even running a virus scan can alter valuable metadata. Knowing which metadata fields are most important to your investigation can be very helpful.
  • Missed opportunities for preservation: Late identification of automatic deletion policies can let potentially valuable data slip permanently out of reach. This is particularly risky for data/data types only identified further along in the EDA process. It’s generally better to cast your net wide and over-preserve, initially, than to play it “safe” and under-preserve only to draw a blank later on.
  • Overlooked data types: New communication and file sharing channels appear all the time – even in the most tightly controlled corporate environments. These emerging ESI sources (e.g. Twist, Flock, Dropbox, Google Cloud) are seldom obvious (often residing in Shadow IT – applications adopted without formal IT approval), but their less formal nature can make them treasure troves of potentially relevant information. The onus is on investigators to perform a thorough enquiry into how custodians and organisations are engaging with emerging data sources, in order to leave no stone unturned.
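One practical safeguard against the metadata fragility described above is to record a snapshot of each file’s hash and filesystem timestamps before anyone handles it, so any later alteration can be detected. The sketch below is illustrative only – the function name and field selection are assumptions, and real workflows capture far richer metadata through dedicated forensic tooling.

```python
import hashlib
from pathlib import Path

def snapshot(path: str) -> dict:
    """Record a file's hash and key filesystem timestamps before handling,
    so later changes (e.g. a virus scan updating timestamps) are detectable."""
    p = Path(path)
    stat = p.stat()
    # SHA-256 of the content proves the file itself was not modified.
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    return {
        "path": str(p),
        "sha256": digest,
        "size": stat.st_size,
        "modified": stat.st_mtime,
        "accessed": stat.st_atime,
    }
```

Comparing a fresh snapshot against the original at any later point shows immediately whether content or timestamps have drifted since identification.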

Making the most of EDA’s potential benefits while avoiding these pitfalls (and others like them) takes careful preparation and strategic execution. In this, an experienced partner well-versed in the latest tools, techniques and best practices can be well worth the investment. Get in touch to find out more.