Ask any financial services CEO whether their organization is using or piloting artificial intelligence (AI) and you’re sure to get a positive response. In fact, in a recent global survey of financial services CEOs, just 1 percent admitted they had not yet implemented any AI in their organization.
Not surprisingly, financial services firms are becoming increasingly aware of the significant benefits that AI can deliver — from improving the customer experience and organizational productivity to enhancing data governance and analytics. And they are beginning to realize how AI, machine learning and cognitive capabilities could enable the development of new products and new demand that would not have been possible using traditional technologies. Our survey shows that the majority are now implementing AI across a wide range of business processes.
While this is great news for financial services firms and their customers, the widespread adoption of AI across the organization also creates massive headaches and challenges for those charged with managing risk.
Part of the problem is the technology itself. By replicating a single mistake at massive scale, a ‘rogue’ AI or algorithm has the potential to magnify small issues very quickly. AI is also capable of learning on its own, which means that how individual risks evolve and combine can be hard to predict. Whereas a human rogue employee is limited by capacity and access, an AI can feed bad data or decisions into multiple processes at lightning speed. And that can be hard to catch and control.
The ‘democratization’ of AI is also creating challenges for risk managers. The reality is that, with today’s technologies, almost anyone can design and deploy a bot. As business units start to see the value of AI within their processes, the number of bots operating in the organization is proliferating quickly. Few financial services firms truly know how many bots are operating across the enterprise and that means they can’t fully understand and assess the risks.
All of this would be fine if risk managers were positioned to help organizations identify, control and manage the risks. But our experience suggests this is rarely the case. In part, this is because few risk managers have the right capabilities or understanding of the underlying algorithms to properly assess where the risks lie and how they can be managed. But the bigger problem is that risk management is — all too often — only brought into the equation once the bot has been developed. And that is far too late for risk teams to get up to speed on the technologies and provide valuable input that can help implement effective controls from the outset.
It’s not just financial services decision-makers and risk managers that are struggling with these challenges. So, too, are regulators, boards and investors. They are starting to ask difficult questions of the business. And they are not confident about the answers they are receiving.
There are five things that financial services organizations could be doing to improve their control and governance over AI.
While there are still significant unknowns about the future evolution of AI and its associated risks, there are a few things that we know for sure: financial services firms will continue to develop and deploy AI across the organization; new risks and compliance issues will continue to emerge; and risk management and business functions will face continued pressure to ensure that the AI and associated risks are being properly managed.
The reality is that — given the rapid pace of change in the markets — financial institutions will need to be able to make faster decisions that enable the organization to move from ideation to revenue with speed. And that means they will need to greatly improve the processes they use to evaluate, select, invest in and deploy emerging technologies. Those that get it right can look forward to competitive differentiation, market growth and increased brand value. Those that delay or take the wrong path may find themselves left behind.