In the world of capital project disputes, budget and time constraints often pressure legal teams to rely on analyses based on recounted versions of events and only a sample of the available data.
Though this approach is widely adopted, it exposes both the firm and its client to unnecessary risk: an opposing party needs only one conflicting conclusion, drawn from a small sample of the project's available information, to cast doubt in legal proceedings.
Fortunately, there is a way to bolster one's legal strategy and leave little room for dispute. By leveraging data analytics and powerful visualization tools, clarity is at one's fingertips like never before, all within reasonable time and effort.
Law firms have traditionally relied on analyzing samples of construction project data to build a legal case for their clients. While cases can be won using this method, it is susceptible to scrutiny. In some cases, the data sample used for that analysis may not be truly indicative of the issue at hand. In others, that data may have been entered incorrectly (e.g., human error while filling in schedules or site logs), and later proven to be inaccurate when cross-checked with other samples.
In short, traditional methods leave room for multiple interpretations. Even clients with the best of intentions might not have all the information, or fully understand the specific combination of events that caused schedule or cost overruns. This can result in conclusions that conflict with those offered by opposing parties using different data sets.
Ultimately, law firms must rely on their client's version of the story, and though the data samples provided may be accurate within their limited scope, they risk weakening the legal strategy.
Take, for instance, a hypothetical dispute involving schedule overruns on one structural component of a large construction project. Law firms often lack the time or resources to collect data on every section of the asset being built and will instead evaluate a specific section, activity type, or timeframe assumed to have the most issues. That data set might offer insights into that one section of the build, but it may also prove inaccurate when compared against data for all sections of the asset, a greater number of activity types, or the entire duration of the project.
The complexity of today's megaprojects means disputes must be handled with more sophisticated tools and processes.
Fortunately, advances in data collection and analysis mean we no longer have to limit our analyses the way we used to. Today, we can use advanced data collection, management, and analysis tools to create a data model using the entire population of data in the project documentation to develop a more accurate picture of the issue at hand. Once that model is built and populated, it can be analyzed in any number of ways, such as calculating everything from schedule delays to assessing project logistics and performance (congestion on site, the readiness of components, productivity, etc.).
How can this be achieved? Instead of manually extracting and transcribing data points from project documentation into traditional calculation software (such as Excel), algorithms can be designed to automatically extract relevant data from a multitude of documents and then categorize it following a set of rules defined by a team of experts. Calculations that the team wishes to apply to the data can also be programmed into the model and run to produce results quickly.
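To illustrate the idea, here is a minimal Python sketch of rule-based extraction and categorization. The log lines, categories, and keywords are entirely hypothetical assumptions for illustration; a real model would be built on a project's actual documentation and expert-defined rules.

```python
import re
from datetime import date

# Hypothetical expert-defined rules mapping keywords found in a log
# entry to a dispute-relevant category (names are illustrative only).
CATEGORY_RULES = {
    "weather": ["rain", "wind", "frost"],
    "design_change": ["revision", "rfi", "change order"],
    "site_access": ["congestion", "blocked", "laydown"],
}

def extract_entries(raw_log: str):
    """Pull (date, text) pairs out of free-text site-log lines
    shaped like '2021-03-04: Concrete pour delayed by rain'."""
    pattern = re.compile(r"(\d{4})-(\d{2})-(\d{2}):\s*(.+)")
    entries = []
    for line in raw_log.splitlines():
        m = pattern.match(line.strip())
        if m:
            y, mo, d, text = m.groups()
            entries.append((date(int(y), int(mo), int(d)), text))
    return entries

def categorize(text: str) -> str:
    """Assign the first category whose keyword appears in the entry."""
    lowered = text.lower()
    for category, keywords in CATEGORY_RULES.items():
        if any(k in lowered for k in keywords):
            return category
    return "uncategorized"

# Illustrative site-log excerpt (fabricated for the example).
raw = """\
2021-03-04: Concrete pour delayed by rain
2021-03-05: RFI 112 issued, awaiting design revision
2021-03-06: Laydown area blocked by steel delivery"""

tagged = [(d, categorize(t)) for d, t in extract_entries(raw)]
```

Once entries are tagged this way across the entire document population, the same categorized records can feed downstream calculations such as delay attribution or congestion analysis.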
Such a model can come in very handy when multiple iterations of a document need to be compared, such as monthly project schedule updates, or when one wants to collect and cross-check data from multiple project documents to ensure only the most accurate version of events is presented.
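A sketch of what comparing two monthly schedule updates might look like, using Python's standard library. The activity IDs and forecast finish dates below are assumptions invented for the example, not data from any real project.

```python
from datetime import date

# Two monthly iterations of the same schedule, keyed by activity ID.
# All IDs and dates are illustrative.
update_may = {
    "A100": date(2022, 6, 10),
    "A200": date(2022, 7, 1),
    "A300": date(2022, 8, 15),
}
update_june = {
    "A100": date(2022, 6, 10),   # unchanged
    "A200": date(2022, 7, 22),   # slipped three weeks
    "A300": date(2022, 9, 1),    # slipped
    "A310": date(2022, 9, 20),   # activity added in the June update
}

def diff_updates(old, new):
    """Report per-activity slippage (in days) between two schedule
    iterations, plus any activities added or removed."""
    slipped = {
        act: (new[act] - old[act]).days
        for act in old.keys() & new.keys()
        if new[act] != old[act]
    }
    added = sorted(new.keys() - old.keys())
    removed = sorted(old.keys() - new.keys())
    return slipped, added, removed

slipped, added, removed = diff_updates(update_may, update_june)
```

Run across every monthly update rather than two hand-picked ones, this kind of comparison surfaces exactly when and where forecasts began to slip, instead of relying on a single snapshot.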
Most importantly, once a model is up and running, analysis results can be produced quickly and efficiently. This allows the expert team to change its analysis approach at short notice if needed, for example in response to newly unfolding events.
It is one thing to use advanced models to build an air-tight case. It is another to argue that case in a convincing and meaningful manner.
By leveraging data visualization tools, analysis results can be translated into report- and presentation-friendly formats (e.g., a simplified 'schedule view' of activities). The results of certain analyses can also be displayed in a number of different ways without significant additional effort. This not only ensures that all parties arrive at the same conclusion, but also makes it clear that the data presented is the result of a comprehensive evaluation of the facts.
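As a toy illustration of a simplified 'schedule view', the sketch below renders activities as text bars spanning their start and finish weeks. The activities and durations are invented for the example; in practice a dedicated visualization tool would produce the presentation-ready output.

```python
# Illustrative activity list: (name, start week, finish week).
activities = [
    ("Excavation",   1, 4),
    ("Foundations",  3, 8),
    ("Steel erect",  7, 12),
]

def render_gantt(acts, total_weeks=12):
    """Draw each activity as a bar of '#' across its active weeks,
    with '.' marking the inactive weeks."""
    lines = []
    for name, start, finish in acts:
        bar = "".join(
            "#" if start <= wk <= finish else "."
            for wk in range(1, total_weeks + 1)
        )
        lines.append(f"{name:<12} {bar}")
    return "\n".join(lines)

chart = render_gantt(activities)
```

Even this crude view makes overlaps and sequencing visible at a glance, which is the same clarity a polished schedule view delivers in a courtroom presentation.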
Clarity is key in the high stakes world of capital project claims. Fortunately, sophisticated analytics can go a long way towards helping clients minimize the risk of having their legal strategies invalidated in court.
Organizations can do this by leveraging advanced data models that collect all project data and arrive at conclusions based on analysis, comparisons, and intelligent connections. By cross-checking the validity of information from multiple sources and analyzing the entire population of available data, clients can align their strategy with a clear and robust representation of the project's situation, without fear of 'surprises' coming up during potential litigation. What's more, this method makes it possible to request deeper dives into specific situations within reasonable time and budget constraints.
Capital project disputes are becoming more complex. As such, successful outcomes depend on a client's ability to leverage powerful data analytics and visualization tools to make a stronger case in the courtroom.