Technological innovation is bringing change at an unprecedented pace.
New business models are emerging and different types of firms are delivering financial services.
Regulators are supportive of digital innovation but are examining whether existing regulatory and supervisory frameworks are sufficient to manage the risks that could arise. Firms need to consider the impact of new regulation, both FS-specific and broader (such as the EU Digital Services Act and Digital Markets Act), as governments look to manage the risks of digital innovation across all sectors.
Regulatory sandboxes allow FinTech start-ups and other innovators to conduct live experiments in a controlled environment under a regulator's supervision. There are established sandboxes in several EU member states. The UK government is implementing recommendations from the Kalifa review of UK Fintech, including:
- An FCA digital sandbox focused on sustainability challenges
- An FCA scalebox — an enhanced sandbox to support FinTech firms looking to scale up
- A regulatory “nursery” for enhanced oversight of new authorised firms
The UK regulators will also work on a sandbox for financial market infrastructure (FMI) firms that are exploring how to use technologies, such as distributed ledger technology (DLT), to innovate in the settlement of financial securities.
Regulators are concerned about forays by large global technology companies (“BigTech”) into financial services. These companies already have a large and captive user base, so they can rapidly build market share in financial services. BigTech or large mixed activity groups (MAGs) could also present risks through their interconnectedness with incumbent financial services groups as third-party service providers of, for example, cloud data storage and data analytics.
The Financial Stability Institute (FSI) of the BIS has noted the need for regulators to balance the efficiency and financial inclusion BigTech could bring against possible risks to financial stability and consumer protection, including concerns around competition and data privacy. The FSI proposes that regulatory authorities consider whether to recalibrate the mix of entity-based and activity-based rules or to introduce a bespoke policy approach, including enhanced disclosures. Given the cross-border nature of these groups, global regulatory cooperation is key.
Within its call for evidence (PDF 301 KB) on digital finance, which is part of its work to inform the Commission's Digital Finance package (see the October 2020 issue for more detail), ESMA is considering whether BigTech and MAGs should be subject to specific supervision. It is also trying to gauge whether the reliance of financial firms on third parties, particularly technology firms, for the delivery of services is fragmenting value chains, introducing new risks not caught by the regulatory framework and creating cross-border supervisory challenges.
A focus on customers
ESMA is considering whether a more holistic approach is needed for the regulation and supervision of platforms. Platforms can market and provide access to multiple financial services from different financial and technology firms, across EU Member States or third countries. Elizabeth McCaul, Member of the ECB's Supervisory Board, echoed some of these concerns: “Introducing more technology into the delivery of financial services cannot be allowed to become an unregulated back door or lead to an unlevel playing field. Activities conducted by large online platforms go beyond Europe's borders, meaning that a global perspective is needed to identify and manage risks in order to safeguard the interests of individuals and firms.”
Financial services firms providing or using Artificial Intelligence (AI) tools within the EU will be subject to the proposed “AI Act”, which will establish harmonised rules for a proportionate, risk-based approach to AI. The proposed rules include:
- Strict and mandatory requirements for “high-risk” AI, such as that used to evaluate creditworthiness or establish credit scores
- Limited requirements for specific types of AI, such as chatbots
- A ban on certain uses of AI, such as systems that deploy subliminal techniques beyond a person's consciousness
EIOPA's AI Governance Principles (PDF 1.2 MB) include proportionality, fairness and non-discrimination, transparency and explainability, and adequate human oversight throughout the AI system's life cycle. Case studies are included to help insurers with implementation.
EIOPA is also seeking stakeholders' feedback on the risks and benefits of the use of blockchain and smart contracts in insurance. These technologies could increase process automation and efficiency, enhance customer experience and improve data quality. However, challenges include the complexity and energy consumption of the technology, privacy and data protection, and integration and interoperability with legacy infrastructures and across different blockchains.
The Commission is consulting on the introduction of a European Digital Identity Wallet, which would be available to both individuals and businesses. The personal information stored in the digital ID would help firms to carry out customer checks as part of their anti-money laundering duties. Consumers would have control over their own personal data and firms would not be able to require customers to use the digital ID.
Meanwhile, regulators are reviewing the rules around financial services advertising and marketing, and whether they should be updated for the digital age. The Commission has launched a consultation on the rules around distance marketing of consumer financial services, as the current rules are nearly 20 years old. In the UK, a change in the law at the end of the transition period means that social media firms must ensure that advertisements or marketing for investment activity are issued or approved by FCA-authorised firms, helping to screen out scams. The FCA has been encouraged by firms' initial engagement but has warned that it will act if it does not see effective compliance.