Financial services firms are embracing artificial intelligence and emerging technologies like never before. But are they ready to manage the risks?
Ask any financial services CEO if their organisation is using or piloting artificial intelligence (AI) and you’re sure to get a positive response. In fact, in a recent global survey of financial services CEOs, just 1 percent admitted they had not yet implemented any AI in their organisation at all.
Not surprisingly, financial services firms are becoming increasingly aware of the significant benefits AI can deliver – from improving the customer experience and organisational productivity through to enhancing data governance and analytics. And they are beginning to realise how AI, machine learning and cognitive capabilities could enable new products and create demand that would not have been possible with traditional technologies. Our survey shows that the majority are now implementing AI across a wide range of business processes.
While this is great news for financial services firms and their customers, the widespread adoption of AI across the organisation also creates massive headaches and challenges for those charged with managing risk.
Part of the problem is the technology itself. By replicating a single mistake at massive scale, a 'rogue' AI or algorithm can magnify small issues very quickly. AI is also capable of learning on its own, which means the permutations of individual risks can be hard to predict. Whereas a rogue human employee is limited by capacity and access, an AI can feed bad data or decisions into multiple processes at lightning speed. And that can be hard to catch and control.
The ‘democratisation’ of AI is also creating challenges for risk managers. The reality is that, with today’s technologies, almost anyone can design and deploy a bot. As business units start to see the value of AI within their processes, the number of bots operating in the organisation is proliferating quickly. Few financial services firms truly know how many bots are operating across the enterprise and that means they can’t fully understand and assess the risks.
All of this would be fine if risk managers were positioned to help organisations identify, control and manage the risks. But our experience suggests this is rarely the case. In part, this is because few risk managers have the capabilities or the understanding of the underlying algorithms to properly assess where the risks lie and how they can be managed. The bigger problem, however, is that risk management is – all too often – only brought into the equation once the bot has been developed. And that is far too late for risk managers to get up to speed on the technologies and provide the input needed to implement effective controls from the outset.
It’s not just financial services decision-makers and risk managers that are struggling with these challenges. So, too, are regulators, boards and investors. They are starting to ask difficult questions of the business. And they are not confident about the answers they are receiving.
There are five things that financial services organisations can do to improve their control and governance of AI.
The first step to understanding and managing AI is knowing where it currently resides, what value it currently delivers and how it fits into the corporate strategy. It’s also worth taking the time to understand who developed the algorithm (was it an external vendor?) and who currently owns the AI. Look at the entire organisational ecosystem – including suppliers, data providers and cloud service providers.
Second, risk functions need new capabilities. We have helped a number of banks and insurers identify and assess the capabilities and skills needed to create an effective risk function for an AI-enabled organisation. It's not just about risk managers having the right skills. It's also about becoming more agile, technologically savvy and commercially focused. Particular attention should be paid to developing sustainable learning programs that include the theoretical, practical and contextual capabilities required to encourage continuous learning.
Third, data is a fundamental building block for getting value from new and emerging technologies like AI. Our experience suggests that most financial institutions will need to continue to invest heavily in ensuring their data is reliable, accessible and secure. This is not just about feeding the right data into the machine; it is also about mitigating operational risks and potential biases by verifying the quality and integrity of the data the organisation is using.
Fourth, while some internal audit functions and risk managers are using existing frameworks such as SR 11-7 and the OCC's Risk Management Principles as a starting point, we believe that AI professionals, risk managers and boards will need to develop a purpose-built risk and control framework (figure 1) that can help mitigate data privacy, security and regulatory risks across the entire lifecycle of the model. For more details, see KPMG's AI Risk and Controls Matrix.
Finally, AI – once fully realised – will likely extend across the entire culture of a financial services firm. That will require decision-makers to think critically about whether they have the right skills, capabilities and culture to encourage employees to properly operate, manage and control the AI they work with. More than just new technology skills, organisations will need to consider how they transform the organisational mindset to apply a risk lens to AI development and management.
While there are still significant unknowns about the future evolution of AI and its associated risks, there are a few things that we know for sure: financial services firms will continue to develop and deploy AI across the organisation; new risks and compliance issues will continue to emerge; and risk management and business functions will face continued pressure to ensure that the AI and associated risks are being properly managed.
The reality is that – given the rapid pace of change in the markets – financial institutions will need to make faster decisions that enable them to move from ideation to revenue with speed. And that means they will need to greatly improve the processes they use to evaluate, select, invest in and deploy emerging technologies. Those that get it right can look forward to competitive differentiation, market growth and increased brand value. Those that delay or take the wrong path may find themselves left behind.
©2020 KPMG, an Australian partnership and a member firm of the KPMG global organisation of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved. The KPMG name and logo are trademarks used under license by the independent member firms of the KPMG global organisation.
Liability limited by a scheme approved under Professional Standards Legislation.
For more detail about the structure of the KPMG global organisation please visit https://home.kpmg/governance.