Sander Klous and Martin Sokalski | 14 August 2019

How should we manage the expectations around artificial intelligence (AI) when, every week, a new academic, business or media report raises the bar on how AI will transform our world?

In many ways, the architects of AI-driven business are facing the same challenges as past proponents of game-changing technology. To win over skeptics, they feel the need to paint a perfect picture of how their new technology will make life better for all when, in fact, its capabilities cannot be fully judged until it has had time to be tested by all the variables society will throw its way. There’s a reason, after all, that nearly all breakthrough technologies experience what the Gartner Hype Cycle calls the “trough of disillusionment”.1

To manage expectations accurately and achieve the desired results, organizations should be realistic about AI’s potential as well as its limitations.

For AI, the stakes involved in managing expectations are very high because most people are not going to understand how AI works or how it comes up with its recommendations. If they don’t understand what AI does, and in turn don’t trust it, they probably won’t want to use it. That trust will grow when people gain more positive experiences with AI.

So, if trusting AI through good experience is more important than understanding the intricacies of its inner workings, how can organizations best manage expectations of what AI can currently deliver in order to build and maintain trust? Here are some common-sense approaches that can help manage expectations and deliver AI success.

Don’t overpromise

To generate positive experiences, AI must exceed expectations. A simple way to achieve this is not to overpromise in the first place. Consider the lessons of the cyber security industry, which at first aimed to make enterprise information networks completely secure. That quickly proved to be an impossible mission, and cyber security technology proponents now acknowledge that every system can be hacked. Admitting that technology alone couldn’t solve the entire problem ultimately helped organizations establish the processes and systems needed to prioritize protection of the most important parts of the organization.

Be honest about AI bias

AI insights are only as good as the data the algorithms have to draw on. Currently, a great deal of the data AI algorithms depend on comes from human activity. As a result, that data has an inherent element of human bias built in, which can compromise the supposedly agnostic analysis that AI promises.

Take the example of the City of Amsterdam, where KPMG’s solution, AI In Control, has been helping to develop a risk management framework that oversees the integration of AI into the issue management system for public spaces. This system allows residents to file service requests online for issues such as trash on the street. The algorithm identifies the issue type, determines which municipal service unit should respond and decides where the work should be prioritized.

So far, two elements of potential bias have been identified that could lead to undesired side effects if not corrected. First, people living in the more affluent parts of the city tend to submit the most complaints. Based on that information alone, the algorithm might conclude that these parts of the city need more attention, leading to an undesirable bias in addressing complaints. The second element relates to the performance of the algorithm, which is better at understanding complaints that are expressed in clear language. This means that the so-called ‘first time right percentage’ (i.e. the fraction of complaints that are sent directly to the correct government official) is higher for those complaints, again leading to an undesirable bias. Blindly relying on the results of this algorithm would have introduced bias that would be difficult to explain to the citizens of Amsterdam. Organizations need to incorporate methods and frameworks (like the AI In Control framework) that introduce checks and balances to capture such mistakes before they take effect.
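To make the first element concrete, here is a minimal Python sketch of how complaint counts might be re-weighted by an estimated reporting rate per neighborhood before deciding where to direct attention. The neighborhood names, counts and rates are invented for illustration; they are not drawn from Amsterdam’s actual data, nor is this part of the AI In Control framework.

```python
# Hypothetical sketch: correcting for reporting bias before prioritizing
# neighborhoods. Raw complaint counts favor areas that report the most,
# so we normalize by an estimated reporting rate per neighborhood.
# All figures and names below are illustrative.

complaints = {"centrum": 480, "zuid": 520, "nieuw-west": 190}          # raw complaint counts
reporting_rate = {"centrum": 0.8, "zuid": 0.9, "nieuw-west": 0.3}      # estimated share of issues actually reported

def estimated_issue_load(counts, rates):
    """Scale raw complaint counts by the inverse of the reporting rate."""
    return {area: counts[area] / rates[area] for area in counts}

adjusted = estimated_issue_load(complaints, reporting_rate)
for area, load in sorted(adjusted.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: ~{load:.0f} estimated issues")
# nieuw-west now ranks highest despite submitting the fewest complaints.
```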

Define what success and failure look like

Setting unrealistic expectations will set your organization up for failure. As noted above, there’s no point setting a goal that algorithms will never make mistakes or show no bias, because when a mistake does occur or bias is detected, trust in the AI will diminish.

By setting achievable expectations, organizations can have an honest debate about what success looks like and help confront the imperfection of today’s solutions, which in turn will help with the adoption of algorithms.

A good first step is to know where AI sits in your organization. By conducting an inventory of where and how algorithms are used in operations, organizations can get a sense of the financial, legal and reputational risk to the business if an algorithm makes a wrong decision or delivers flawed information.

Once organizations know the potential risks, they can start identifying and prioritizing their AI ‘crown jewels’: the algorithms that have a critical effect on the business. They can then determine how much money and how many resources it will take to ensure those algorithms behave properly. Simply put, some algorithms will require more attention than others because the impact of their errors will be higher. This helps create a trust/risk hierarchy within the organization.
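As a rough illustration of such a trust/risk hierarchy, the sketch below ranks a hypothetical inventory of algorithms by a simple impact-times-likelihood risk score. The entries, scales and scoring rule are assumptions made for illustration, not a prescribed method.

```python
# Hypothetical algorithm inventory with simple risk scoring.
# Scales, weights and entries are illustrative assumptions; a real
# inventory would follow the organization's own risk taxonomy.

from dataclasses import dataclass

@dataclass
class AlgorithmRecord:
    name: str
    business_impact: int    # 1 (low) .. 5 (critical) if the algorithm errs
    error_likelihood: int   # 1 (rare) .. 5 (frequent)

    @property
    def risk_score(self) -> int:
        return self.business_impact * self.error_likelihood

inventory = [
    AlgorithmRecord("complaint_routing", business_impact=3, error_likelihood=3),
    AlgorithmRecord("credit_scoring", business_impact=5, error_likelihood=2),
    AlgorithmRecord("marketing_segmentation", business_impact=2, error_likelihood=4),
]

# 'Crown jewels' first: the highest risk scores get the most oversight and budget.
for record in sorted(inventory, key=lambda r: r.risk_score, reverse=True):
    print(f"{record.name}: risk score {record.risk_score}")
```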

Build trust through insights and governance

In an imperfect AI world, organizations need to be able to implement proper governance, monitoring and controls to deliver the level of transparency that helps individuals trust the decisions being made by the algorithm. That way they will know if an algorithm makes a mistake and what actions are needed after that discovery.

Creating the right checks and balances to manage expectations accurately, and so build trust, is a multi-step process of good governance. It involves the operational layer, where the data scientists developing the AI are subject to regular peer reviews and risk assessments. It also involves risk management control protocols that establish what level of algorithmic risk is acceptable and what that looks like in terms of business performance and reputation. Once those protocols are established, organizations need to build a framework so they can monitor these control objectives and act if an algorithm puts the already defined expectations of success at risk. This framework needs to align with the agile way of working in which algorithms are developed and put into operation, to help ensure that risk management doesn’t slow down innovation.
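As one example of what monitoring a control objective might look like in practice, the sketch below flags complaint categories whose ‘first time right’ percentage falls below an agreed threshold. The threshold, categories and figures are invented for illustration and would in reality come from the organization’s own risk appetite and data.

```python
# Hypothetical monitoring check tied to a control objective: if the
# 'first time right' rate for any complaint category drops below an
# agreed threshold, flag it for review. All values are illustrative.

FIRST_TIME_RIGHT_THRESHOLD = 0.85  # agreed as part of the risk appetite

def check_first_time_right(routed_correctly: dict, totals: dict) -> list:
    """Return (category, rate) pairs whose first-time-right rate is below threshold."""
    flagged = []
    for category in totals:
        rate = routed_correctly.get(category, 0) / totals[category]
        if rate < FIRST_TIME_RIGHT_THRESHOLD:
            flagged.append((category, rate))
    return flagged

# Illustrative weekly figures per complaint category.
totals = {"trash": 1200, "broken_streetlight": 300, "graffiti": 150}
correct = {"trash": 1100, "broken_streetlight": 270, "graffiti": 110}

for category, rate in check_first_time_right(correct, totals):
    print(f"Review needed: '{category}' first-time-right rate is {rate:.0%}")
```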

In this way, by being realistic about AI’s potential as well as its limitations, and by developing a strategy and processes to manage expectations accurately, organizations can create a positive AI experience for their stakeholders and build trust in an algorithmic approach to business, even as they continue to learn from the mistakes made along the way.

Footnote

1 Gartner Hype Cycle, Gartner, Inc.

For more insight on data-related topics, please visit our data-driven technologies article series page.
