My sons were bickering the other day, and one accused the other of being a robot. "No I'm not!" was the reply. "Prove it, punch Dad" came the challenge. The response, "I won't", produced a triumphant: "Ha, you're a robot, because they're programmed not to hurt humans!"
Students of Thaler may detect some confirmation bias in this exchange, and statisticians might lament the failure to disprove a null hypothesis. But the argument got me thinking about artificial intelligence and the question of “who checks the robots” in a rapidly changing world.
Most businesses are drawn to AI by its potential to reduce costs, improve processes, enhance customer experiences and strengthen reliability. So far relatively few are using AI to enhance risk management, although CROs are waking up fast to AI's potential to reduce errors, eliminate bias and create new risk management tools.
However, AI is not a one-way street when it comes to risk management. Its undoubted benefits need to be weighed against the additional risks it can create, especially for established companies unfamiliar with the nature of cognitive computing.
To illustrate this, consider the innovation labs and centres of excellence that many firms set up when experimenting with robotic or cognitive automation for the first time. These hubs test AI applications against internal and external demand, user preferences, economic criteria and regulatory compliance.
A typical innovation lab might seek to identify a large number of potential use cases, to quickly test a broad selection, to learn from those that fail and then to push the handful of ‘winners’ to scale. One of the many benefits of this approach is that it allows a small team to develop a ‘fail fast’ mentality within an established company that may have a traditional culture of punishing failure. This helps to explain why many banks are setting up innovation labs with a ‘licence to fail’.
Unfortunately, the same factors that make ‘failing fast’ a good approach to innovation can also present challenges for risk managers.
First, companies need to ensure that risks are kept within agreed tolerances, even if they are managed in a different or innovative way. Second, the nature of AI itself presents new challenges, including the need to monitor and evaluate automated processes as AI programs learn over time. And a third challenge comes from the need to oversee external collaborations with academia, fintechs and commercial partners. Requests by external developers to access in-house technology platforms are a perfect example of the risk management headaches that collaboration can pose.
Overall, managing the risks associated with AI innovation requires companies to strike a tricky balance. There is a clear need to avoid stifling the potential benefits of AI through excessive caution. But managing risks effectively is crucial to achieving early successes that can win internal support for AI and unlock long-term gains.
One upshot of this is that, ironically, managing the risks of AI will only reinforce the importance of expert human oversight. In fact, future best practice in risk management is likely to blend human and machine intelligence. Just as effective AI requires significant human support, human decision-making will be enhanced by good AI.
In the end, many organisations are likely to find that their ability to realise long-term benefits from AI will depend on their ability to manage short-term AI-related implementation risks. Join us at KPMG's flagship Digital Transformation event to cut through the hype on AI, and secure a place in our tailored breakout session on how to use AI to manage risk across the organisation.