
Artificial intelligence has huge potential across a range of applications: banks can use it to automate data-intensive activities, while supervisors can use it to enhance their banking supervision capabilities. As with any innovative technology used by banks, however, supervisors are eager to understand how banks are managing the associated risks and building teams with the skills required to tackle them.

The rapid advance of digitalisation and the increasing use of technology in our everyday lives mean that individuals and institutions are becoming ever more vulnerable to cybercrime. Hardly a day goes by without news of a major cyberattack, or of troubling experiences affecting colleagues, friends or family members.

Few organisations are more sensitive to these threats than banks, so it is no surprise that for the second year running the European Central Bank (ECB) has identified IT and cyber risk as one of its key Supervisory Priorities for 2020. The ECB will continue to address this topic by carrying out IT on-site inspections and by requiring significant banks to report significant cyber incidents under the SSM cyber incident reporting process. Nor is the supervision of IT risk limited to banks: the ECB follows a comprehensive approach encompassing the endpoints of payment systems and market infrastructures. It also encourages banks to cooperate with a wide range of stakeholders, both internal and external, as exemplified by the TIBER-EU framework, market-wide crisis communication exercises such as UNITAS, and the publication of the final version of the Cyber Resilience Oversight Expectations, which set out best practices for developing cyber resilience in the financial sector.

Fortunately, the growing threat from cybercrime is matched by an expanding range of cybersecurity tools. In particular, artificial intelligence (AI) – which continues to generate headlines for its potential applications in financial services – shows increasing promise in the fight against cybercrime, money laundering, terrorist financing, mis-selling and fraud. The ability of AI to quickly spot patterns in large and unstructured datasets has huge potential not only to enhance the speed and accuracy of crime detection, but also to automate and improve data-intensive activities such as regulatory reporting, thereby lowering risks whilst reducing costs.
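The pattern-spotting idea can be illustrated with a deliberately simple sketch: flagging transactions whose amounts deviate sharply from the norm. This is a toy example of the general principle, not any bank's or supervisor's actual method – the function, threshold and data are all invented, and real systems combine many features with far more sophisticated models. A median-based score is used so that a single large outlier cannot inflate the yardstick and mask itself:

```python
import statistics

def flag_outliers(amounts, threshold=3.5):
    """Flag values far from the median, scaled by the median absolute
    deviation (MAD) – a robust alternative to mean/standard deviation."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # all values identical: nothing stands out
    return [a for a in amounts if abs(a - med) / mad > threshold]

# Mostly routine payments with one conspicuous outlier (invented data)
payments = [102.0, 98.5, 101.2, 99.9, 100.4, 5000.0, 97.8, 103.1]
print(flag_outliers(payments))  # → [5000.0]
```

A plain z-score against the mean would miss this case: the outlier itself drags the mean and standard deviation upwards, which is why robust statistics are a common first step in anomaly detection.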

Indeed, some supervisors have already set up specific SupTech roadmaps, institution-wide digital transformation and data-driven innovation (DT&DI) programmes, or other initiatives such as accelerators, innovation labs, and external partnerships. We are also seeing increasing use of ‘tech sprints’ – intensive workshops sometimes known as ‘hackathons’ – that bring industry players and supervisors together to explore specific challenges or use cases.

With these goals in mind, the ECB has set up its own dedicated SupTech Hub to consider the use of AI and other technologies in banking supervision. The Hub aims to connect internal and external stakeholders, helping national supervisors to understand the latest tech and providing analytical support to other functions. Some of the Hub’s current projects include the use of natural language processing to improve the search function for unstructured information, and the introduction of machine learning to improve work-intensive processes.
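As a rough illustration of the natural language processing approach mentioned above, a basic search over unstructured documents can be built from TF-IDF term weights and cosine similarity. This is a minimal sketch over invented documents, not a description of the SupTech Hub's actual tooling:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weight each term by frequency in the document and rarity across
    the corpus (TF-IDF). Returns the idf table and one sparse vector
    per document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vectors = [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]
    return idf, vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    if not u or not v:
        return 0.0
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v))

# Invented mini-corpus of supervisory document titles
corpus = [
    "credit risk exposure report",
    "cyber incident reporting process",
    "liquidity coverage ratio report",
]
docs = [text.split() for text in corpus]
idf, vectors = tfidf_vectors(docs)

# Score a free-text query against every document and return the best match
query = {t: idf[t] for t in "cyber incident".split() if t in idf}
scores = [cosine(query, v) for v in vectors]
best = max(range(len(corpus)), key=scores.__getitem__)
print(corpus[best])  # prints "cyber incident reporting process"
```

Production search over supervisory documents would add tokenisation, stemming, multilingual handling and typically learned embeddings, but the ranking principle is the same: score each document against the query and surface the closest matches.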

Of course, any new technology brings potential challenges, and understanding how banks are actually using AI is high on the ECB's agenda. So far, most banking applications of AI have focused on automating repetitive processes such as data reconciliations. In contrast, the use of deep learning, which allows algorithms to change the way banks operate with limited human input, is still in its infancy. Potential risks that could arise from AI – whichever uses it is put to – include:

  • Data bias: The risk of statistical errors or interference arising from the inherent features of datasets.
  • Privacy breaches: The desire to reduce risks must not override the protection of sensitive personal and commercial data.
  • Data loss: Shared criteria for data preservation will be vital in maintaining the accessibility of big data.
  • Regulation: GDPR sets out a number of limitations on automated decision-making, which could limit the efficiency and efficacy of AI.
  • Malicious manipulation: As the use of AI grows, the potential for malicious manipulation of big datasets may also increase.
  • Opacity: The more advanced AI algorithms become, the harder it can be to understand and monitor the conclusions that they draw.

Managing these risks will require banks to have an effective governance framework. Establishing teams with a careful combination of scientific, engineering, statistical and economic skills will also be vital, as it is clear that AI programmes are only as good as their underlying data and their human interpretation. AI may still be in its early days, but using and managing it effectively will be of growing importance for banks and supervisors during the decade that lies ahead.