The Financial Stability Board has published a paper on artificial intelligence (AI) and machine learning in financial services.
The paper (PDF 650 KB) takes the same approach as many other recent papers from international and national standard-setters on Fintech innovations: (i) these innovations are of potential value to firms, consumers and supervisors; (ii) they also bring risks to firms, consumers and financial stability; and (iii) some regulatory interventions may be required, although it remains unclear what form such interventions might take. Indeed, it is difficult to see how regulation could mitigate some of the risks arising from AI and machine learning.
AI and machine learning have already been adopted in some areas of financial services, including to assess credit quality, price and market insurance contracts, automate client interactions, optimise capital, identify trading opportunities and optimise trading execution (for example by analysing the market impact of trading large positions), and back-test models.
Meanwhile, RegTech and SupTech applications of AI and machine learning could help to improve firms' regulatory compliance (for example, in undertaking KYC checks) and to increase supervisory effectiveness (including in detecting money laundering and fraud, and in identifying suspicious trading patterns).
On financial stability, the paper focuses on:
Risks to individual firms include:
Although not the main focus of the paper, risks to consumers also come across strongly:
Supervisors can be expected to look closely at how well firms control and mitigate the risks arising from their increased use of AI and machine learning, not least in terms of governance, understanding of the technology, and relationships with third-party providers.
Firms are also likely to be required to demonstrate that they understand, and can effectively manage, the risks that greater use of AI and machine learning might pose for some groups of consumers.