In the previous blog post of the series, we outlined the different issues that can be encountered in the use of AI and analytics. Based on our experience, we believe organizations should take a systematic approach to gaining trust in their analytics, built on four key anchors of trust: quality, effectiveness, integrity and resilience.
Data of poor quality can have dramatic consequences. Consider, for instance, a European travel company whose marketing campaign was based on mislabelled pictures. The website ended up showing pictures of big marble hotels and air conditioning instead of "dream" pictures, because the pictures had been labelled by hand by people whose idea of the ideal holiday differed from that of the average European. The quality of the data, and of the processes used to collect it, is therefore decisive, and quality checks should be implemented. However, a recent KPMG study reported that less than half of organizations worldwide currently carry out such quality control procedures.
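Such quality checks can often be automated. The sketch below is a minimal, illustrative quality gate for labelled records; the field names (`image_id`, `label`) and the allowed label set are assumptions for the example, not details from the travel company case.

```python
# Minimal sketch of an automated data-quality gate for labelled records.
# Field names and the allowed label set are illustrative assumptions.

ALLOWED_LABELS = {"beach", "mountain", "city", "countryside"}

def quality_issues(records):
    """Return a list of (index, problem) pairs found in the records."""
    issues = []
    for i, rec in enumerate(records):
        if not rec.get("image_id"):
            issues.append((i, "missing image_id"))
        label = rec.get("label")
        if label not in ALLOWED_LABELS:
            issues.append((i, f"unexpected label: {label!r}"))
    return issues

records = [
    {"image_id": "img-001", "label": "beach"},
    {"image_id": "", "label": "mountain"},
    {"image_id": "img-003", "label": "air conditioning"},
]
print(quality_issues(records))
```

A report like this, run every time new data is collected, would have flagged the unexpected labels before they reached the website.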
Having proof of the effectiveness of the analytics is key to strengthening trust. However, measuring effectiveness can be extremely difficult. Take, for instance, a customer churn prediction model: how can you correctly assess whether it is the model that is effective, or the actions you took based on it? Moreover, as stated in the previous blog post, depending on the context (the medical field vs. alcohol tests), you may prefer to minimize the false positive rate at the expense of false negatives, or vice versa.
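This trade-off is often controlled by the decision threshold applied to the model's scores. The sketch below uses made-up churn probabilities and labels (not data from any real model) to show how raising the threshold reduces false positives while increasing false negatives:

```python
# Illustrative sketch: the decision threshold trades false positives
# against false negatives. Scores and labels below are made-up data.

def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]  # predicted churn probabilities
labels = [1,    1,    0,    1,    0,    0]     # 1 = customer actually churned

for t in (0.3, 0.5, 0.7):
    fp, fn = confusion_counts(scores, labels, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

With the toy data above, a threshold of 0.3 yields two false positives and no false negatives, while 0.7 yields none and one respectively: which point on that curve counts as "effective" depends entirely on the context.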
Shareholders will not invest in a company if they are not confident in the integrity of the AI solutions it builds. Imagine a GPS routing you along roads where gas stations of a certain brand are located, rather than along the shortest way. Several questions require organizations' attention: how transparent are the processes? Is everything in line with legislation? Are ethical boundaries being crossed?
And finally, the resilience of the system and its models is also a critical aspect. Customers will not use a model that is not flexible enough, and thus not optimized over the long term. A traditional in-car GPS will rapidly be outperformed by a smartphone GPS application that adjusts quickly to new contexts and road modifications. A trusted model needs to change or adapt rapidly to the emergence of new technologies, changes in regulation, the evolution of society, and so on.
We therefore propose several ideas to close the trust gap in D&A. As a first step, organizations should assess their current level of trust in D&A. An initial assessment allows them to identify where trusted analytics are most critical to their business and to focus on those areas. Secondly, they should clarify and align goals: why do we collect data? Why do we perform analytics? The answers to these questions should be clear. Two further ideas are increasing internal engagement and developing an internal D&A culture: the trust gap can be narrowed by supporting multidisciplinary teams in which IT, data scientists and business stakeholders work together to align analytics with key business goals. Encouraging transparency in D&A and adopting a portfolio approach are two other promising avenues. As a last piece of advice, and most importantly: be innovative, and encourage experimentation without fear of failure.
"Only those who dare to fail greatly can ever achieve greatly."
-- Robert F. Kennedy