On 19 February 2020 the European Commission presented its strategic plan to shape Europe’s digital future. The aim of this plan is to create a European society empowered by digital solutions, provide new opportunities for businesses, take a human-centric approach as its basis, and encourage the design of trustworthy technologies. Ursula von der Leyen, President of the Commission, highlighted that “Digital Europe should reflect the best of Europe, be open, fair, diverse, democratic, and confident.”
Furthermore, the Commissioner for the Internal Market, Thierry Breton, stated that European businesses and SMEs should have access to the huge wave of industrial and public data, creating added value for Europeans, including by developing Artificial Intelligence applications.
As previously announced in its 2020 Work Programme, the Commission has now published the long-awaited strategy “A Europe Fit for the Digital Age”, in which it presents its approach to the challenges and opportunities brought about by digitalization and introduces new regulatory rules for the digital economy.
Over the next five years, the Commission will propose new legislation and actions along the following three key streams:
The White Paper on AI sets out policy options to support the twin objectives of promoting the uptake of AI in the EU and ensuring that this technology is used in a manner that respects EU values and fundamental rights (such as human dignity and privacy protection). The Paper analyzes and proposes key elements crucial to a future EU Regulation on AI, consolidated into a framework of excellence and trust. In particular, trustworthiness is a key requirement for the development of AI.
While AI brings enormous benefits, it also poses certain risks, such as violations of the fundamental rights to privacy, data protection, and non-discrimination, and it further raises several safety and liability issues. This White Paper on AI constitutes a robust step towards the creation of a clear European regulatory framework addressing the risks created by AI-based technologies.
The purpose of the White Paper is to set out policy options for AI and to invite all interested stakeholders to react to these options and contribute to the Commission’s decision-making in this field. Companies have the opportunity to make their voices heard and submit a response to the Commission, in light of their business needs, until 19 May 2020.
Apart from the existing EU laws covering inter alia consumer and data protection, the Commission intends to put in place new legislation specifically on AI, which will entail the following key elements:
A. The scope of its application will cover all products and services relying on AI.
B. The new legislation will apply only to high-risk AI applications. The Commission puts forward criteria under which AI applications can be classified as high or low risk, based on whether the sector and the intended use involve significant risks (critical sector and critical use). If an AI application falls under the high-risk category, it will be subject to a series of new mandatory requirements (listed below). Sectors that could be considered high-risk include healthcare, transport, policing and the judiciary. For low-risk applications, the existing EU rules will continue to apply, such as the General Data Protection Regulation (GDPR).
It will be a delicate balancing act to ensure that regulatory intervention with stricter rules is proportionate and targeted, without placing an undue burden on industry that would ultimately become a hurdle to digital innovation and uptake.
As outlined above, stricter rules will apply to high-risk AI applications only. These requirements consist of the six key features below:
C. Since many actors are involved in the lifecycle of an AI system, the Commission will define and distribute the obligations addressed to the relevant economic operators. The general principle which will apply as a basis is that “each obligation should be addressed to the actor who is best placed to address any potential risks”.
In addition, the requirements will be applicable to all economic operators providing AI-enabled products or services within the EU, regardless of whether their place of establishment is in or outside the EU.
D. To ensure that AI technologies will meet the defined standards, high-risk technologies will be tested and certified before they enter the market. An objective prior “conformity assessment” will be established to ensure that AI applications are robust and trustworthy. Such a conformity assessment will be mandatory for all economic operators and could include procedures for testing, inspection and certification (e.g. checks of algorithms and data sets).
Competent national authorities will monitor compliance and undertake post-market controls; sanctions could even be imposed should certain technologies fail to meet the necessary requirements.
E. A voluntary labelling scheme will be put in place for non-high-risk AI applications. The voluntary labelling framework will be a legal instrument whereby economic operators can choose, on a voluntary basis, to comply with requirements for ethical and trustworthy AI. Once compliance in this area is guaranteed, a label of ethical and trustworthy AI is granted, allowing the operators to signal that their AI-enabled products and services are trustworthy.
F. An effective enforcement system for compliance with upcoming rules on AI will be introduced. This will require a strong system with public oversight and the appointment of national supervisory authorities.
The Communication from the European Commission outlines a data strategy containing policy and investment measures on the data economy for the years 2021–2027. The ultimate goal is to make the EU a global leader in a data-agile economy while simultaneously respecting European values such as fairness, privacy and data protection, and adopting a human-centric approach.
The EU can be considered to have a competitive disadvantage with respect to data access, weakening its position in the field of AI. Data is considered the fuel for AI, yet until now most of the available data has been stored in non-European centralized or cloud-based locations. The European Commission wants data to be shared, stored and processed in Europe. A clear ambition is to seize the opportunities presented by the enormous value of non-personal data as an indispensable asset in the digital economy, thus facilitating data access while ensuring its responsible use. A first step will be to create a single European data space in which businesses will have easy access to a vast amount of high-quality data, boosting innovation in a manner compliant with ethical principles and privacy rules.
To realize this vision, the EU strategy has put forward several initiatives divided into four pillars. Among those objectives, KPMG Belgium considers the following to be critical: