Striving For the Ultimate Goal
In previous articles in the Discovery Risk Mitigation series, we first outlined the overall journey, then explored how to assess your current maturity from an eDiscovery and Information Governance perspective, and most recently examined the parallel paths to remediation.
In this next blog, we outline what should be the ultimate goal of any Discovery Risk Mitigation programme: to prevent the occurrence of activity that could prove damaging and costly to the reputation and balance sheet of the business.
To reiterate the central tenet of this series: prevention is better than cure. That remains easy to say and harder to achieve, so whilst we’ve outlined a generic approach that takes organisations along a logical path, the specifics will differ from company to company.
Information Governance is Key
But I’d suggest that one point is incontrovertible. By having sound Information Governance practices in place, you’re taking excellent steps towards reducing risk by:
- Classifying content, either at creation or shortly afterwards, to assist with downstream decision making on lifecycle and security implications
- Managing access to and security of the classified data to limit the chance that it is inappropriately exposed or avoidably subjected to possible data leakage scenarios
- Implementing retention policies, only keeping what you are obliged to keep for compliance reasons or what you know has asset or knowledge value, and then ensuring disposition is robustly applied. As mentioned in previous posts, the inherent value of a knowledge asset generally decays with time
- Creating a data map of your information estate and ideally the lineage of the assets therein, so you know what you’re retaining, why and where to find it should you need to respond in a timely fashion
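The governance practices above can be tied together in a single data-map record. The sketch below is purely illustrative: the record fields, the `disposition_due` helper and the seven-year retention period are assumptions for the example, not any particular platform’s schema.

```python
from datetime import date, timedelta

# Hypothetical data-map record combining the practices described above:
# classification, access control, retention period and location.
asset = {
    "name": "2021-supplier-contracts",
    "classification": "confidential",
    "allowed_groups": {"legal", "procurement"},  # access restricted to these teams
    "created": date(2021, 3, 1),
    "retention_years": 7,                        # illustrative compliance obligation
    "location": "s3://records/contracts/2021/",
}

def disposition_due(record, today=None):
    """Return True once the record has outlived its retention period,
    signalling that disposition should be applied."""
    today = today or date.today()
    expiry = record["created"] + timedelta(days=365 * record["retention_years"])
    return today >= expiry
```

With a record like this, a periodic job can apply disposition robustly rather than leaving deletion to individual judgement.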
You are fundamentally better placed to manage risk by doing so, rather than relying on an anarchic data management regime.
Discovery Risk Mitigation
But beyond this, what other steps can you proactively take to mitigate risk?
One key answer is supervision: implementing policies for monitoring data, either to confirm compliance with corporate regulations or to spot potentially fraudulent activity, and as a result being able to take corrective action before misdemeanours become dangerous to the business. Supervision is a powerful tool in the preventative arsenal. Moreover, simply communicating the existence of such a policy can dissuade individuals from even contemplating wrongdoing.
Whilst a policy of blanket monitoring of data may sound like the obvious answer, in reality it is neither practical nor, in a world now dominated by data privacy regulations and legislation, justifiable. In terms of practicality, given the sheer volumes of communication data produced daily from email, messaging and telephony sources, the challenge of monitoring everything would be enormous. Remember that, to date, all leading eDiscovery vendors provide ‘lift and shift’ solutions; by that, I mean you have to move the data to the platform to analyse and review it. Doing so with huge daily volumes would be both costly and time-consuming. Adopting a sampling approach, ingesting and reviewing a subset of the data, is therefore more achievable. How you sample should be defensible and justifiable: the selection should be either randomised or at least free of any demonstrable policy of monitoring particular individuals, which those individuals could subsequently claim is unfair or constitutes harassment.
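As a sketch of what a defensible, randomised sampling policy might look like, the following Python is purely illustrative; the `sample_for_review` helper, the 1% rate and the message identifiers are assumptions rather than any vendor’s API.

```python
import random

def sample_for_review(message_ids, rate=0.01, seed=None):
    """Select a randomised subset of communications for supervision review.

    Using a fixed seed makes the selection reproducible, which helps
    demonstrate that no individual was deliberately targeted.
    """
    rng = random.Random(seed)
    sample_size = max(1, int(len(message_ids) * rate))
    return rng.sample(message_ids, sample_size)

# Hypothetical day's traffic: 10,000 message identifiers.
daily_messages = [f"msg-{i:05d}" for i in range(10_000)]
review_batch = sample_for_review(daily_messages, rate=0.01, seed=42)
print(len(review_batch))  # 100 messages: 1% of the day's traffic
```

Only the sampled batch would then be ingested into the eDiscovery platform for review, keeping cost and effort proportionate.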
Alternatively, a more focused approach might be to target specific events or functions (not individuals!), as long as you can justify the validity of such a monitoring strategy. A classic example would be when a tendering process is underway, when it would be justifiable to monitor the procurement team to ensure no unfair or beneficial treatment is given to any of the tendering parties, nor inappropriate incentives suggested to that team by third parties. Again, the event-specific communications could be ingested into the eDiscovery solution to ensure standards of propriety, probity and integrity. In certain jurisdictions, there are even legal obligations to demonstrate that steps have been implemented to counter potential fraud.
To summarise, the essentials of any supervision policy should be firstly to ensure full communication and transparency with the workforce and secondly to involve your legal team, to check the legality and appropriateness of what is being proposed.
Internal Threat Detection
Earlier in the series, I outlined what I see as the parallel remediation paths of eDiscovery and Information Governance and suggested the panacea is best achieved when they converge. Modern cloud-based platforms such as Microsoft 365 are increasingly extending their functionality to include automated classification, labelling and security capabilities to assist with the Information Governance path, but perhaps the most interesting area of development is the introduction of automated threat detection algorithms and augmented security processes, such as Microsoft’s Internal Threat Detection and Cloud App Security. As an example, these might flag for inspection an email (or an attached document) whose body contains more than a certain number of email addresses, or extensive downloading of content; either could indicate an unauthorised leakage of corporate content.
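The kinds of heuristics described above might be sketched as follows. The thresholds, function names and regular expression are hypothetical illustrations, not how any Microsoft product is implemented; real platforms let administrators tune equivalent policies rather than write code.

```python
import re

# Hypothetical thresholds an organisation might tune for its own risk appetite.
ADDRESS_THRESHOLD = 20       # many distinct addresses in a body may indicate leakage
DOWNLOAD_THRESHOLD_MB = 500  # unusually large daily download volume per user

# Simple pattern for spotting email addresses in free text (illustrative only).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def flag_email_body(body: str) -> bool:
    """Flag an email whose body contains an unusually high number of
    distinct email addresses."""
    return len(set(EMAIL_RE.findall(body))) > ADDRESS_THRESHOLD

def flag_downloads(total_mb: float) -> bool:
    """Flag a user whose daily download volume exceeds the threshold."""
    return total_mb > DOWNLOAD_THRESHOLD_MB
```

Items flagged by rules like these become the leading indicators that a supervising team can then investigate more deeply.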
As mentioned previously, the current leading eDiscovery technologies are not ‘manage in place’ or, rather, ‘investigate in place’ solutions; they require you to move content to the eDiscovery platform for deeper investigation. Microsoft, for one, appears to be pursuing a convergence strategy for Information Governance and eDiscovery, and its integrated Advanced eDiscovery solution is on a journey towards manage in place. At present, however, there remains plentiful opportunity for full investigative and review technologies to take indicators from cloud environments and other content sources and perform the deeper-dive analysis.
Having used the native cloud-based tooling to spot leading indicators, one best-practice approach would be to cast the net around the suspicious article or activity and lift a manageable data set to the eDiscovery service. The supervising team would then investigate; if there were grounds for further action, the matter would pass into the standard eDiscovery process, and if not, the content would be deleted. This represents a practical, pragmatic and proactive application of eDiscovery technologies in conjunction with line-of-business applications to deliver Discovery Risk Mitigation. In the final article in the series, we will consider the long-term, immutable retention of content and indeed of historic eDiscovery matters, but if you’d like to discover more about implementing a high-value strategy for prevention rather than just cure, contact us here.
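The ‘cast the net’ step might be sketched as follows, assuming a hypothetical item shape with a timestamp and a participant list; nothing here reflects a real eDiscovery API.

```python
from datetime import datetime, timedelta

def cast_the_net(items, suspicious, window_hours=24):
    """Scope a manageable data set around a suspicious communication:
    keep items that share a participant with it and fall within the
    time window, ready for lifting to the eDiscovery service."""
    window = timedelta(hours=window_hours)
    people = set(suspicious["participants"])
    return [
        item for item in items
        if abs(item["timestamp"] - suspicious["timestamp"]) <= window
        and people & set(item["participants"])
    ]

# Hypothetical corpus around one flagged message.
suspicious = {"timestamp": datetime(2021, 6, 1, 12, 0),
              "participants": ["alice", "mallory"]}
corpus = [
    suspicious,
    {"timestamp": datetime(2021, 6, 1, 15, 0),
     "participants": ["mallory", "bob"]},   # in window, shared participant
    {"timestamp": datetime(2021, 6, 5, 9, 0),
     "participants": ["alice", "bob"]},     # shared participant, outside window
    {"timestamp": datetime(2021, 6, 1, 13, 0),
     "participants": ["carol", "dave"]},    # in window, no shared participant
]
lifted = cast_the_net(corpus, suspicious)
print(len(lifted))  # 2: the flagged item plus one related message
```

Only this scoped set would be moved to the eDiscovery platform, keeping the lift proportionate to the suspicion.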