
2020 was an eventful year. COVID dominated the headlines, pushing Artificial Intelligence (AI) somewhat to the background in people's minds. At the same time, COVID forced many into remote work, which only helped big data and analytics gain more momentum in people's lives[i][ii]. Meanwhile, regulators made steady progress throughout the year in defining guidelines for the application of AI technology.

In June the Dutch government proposed guidelines for the public sector. In October the EU Parliament approved an initial draft proposal for the regulation of Artificial Intelligence in general. In parallel, supervisory authorities, research institutes, and major market players have been proposing their own guidelines for managing Artificial Intelligence risk in the private sector. In some cases, such as in the insurance sector, KPMG Trusted Analytics assisted in these initiatives[iii].

It appears that the attention of regulators is gradually broadening in scope, from innovative Artificial Intelligence with adaptive abilities to the broader existing use of decision-making support in organizations. Guidelines may cover data analysis tools and rule-based systems that their users or makers do not consider AI, or even particularly innovative. This makes sense to us. The way technology is used in automated decision making is, after all, more determinative of its risk profile than the characteristics of the technology itself. The risks were already there; they are just becoming harder to control as decision making is increasingly based on complex technologies and complex ecosystems of decision support tools. That is why KPMG Trusted Analytics plots risk along three independent dimensions: complexity of the design, autonomy, and social impact.
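To make the idea of independent dimensions concrete, here is a minimal sketch of how an application could be profiled along them. The dimension names follow this article; the 1-to-5 scale and the example scores are purely illustrative assumptions, not a prescribed KPMG scoring method.

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    """Profile of one application along the three independent risk
    dimensions named above. The 1-5 scale is an illustrative assumption."""
    name: str
    design_complexity: int  # e.g. simple rules = low, adaptive ML = high
    autonomy: int           # human-in-the-loop = low, fully automated = high
    social_impact: int      # internal dashboard = low, decisions about people = high

# Illustrative point: a technologically simple, rule-based tool can still
# score high on autonomy and social impact, and thus on overall risk.
applications = [
    RiskProfile("Adaptive chatbot", design_complexity=4, autonomy=3, social_impact=2),
    RiskProfile("Rule-based fraud blocker", design_complexity=1, autonomy=5, social_impact=5),
]

for app in applications:
    print(app.name, (app.design_complexity, app.autonomy, app.social_impact))
```

The point of keeping the dimensions independent is that no single score, such as technical complexity, can stand in for the others.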

For many organizations, the past few years have been a period of experimentation and investment in AI. Data science teams have been set up, business cases have been investigated, and applications are going live. But do you know what you already have? Which end-user analytics are in use in the workplace? Do you know what your risks are? How are you going to reap the benefits of your investments in 2021? And how will you ensure that you do not unexpectedly become an unwelcome breaking news item? The KPMG Trusted Analytics team would like to suggest three New Year's tips to help you prepare for a successful AI year in 2021.

Alexander Boer

Senior manager Trusted Analytics
KPMG Nederland
+31 (020) 426 2643
boer.alexander@kpmg.nl

Frank van Praat

Senior manager Trusted Analytics
KPMG Nederland
+31 (030) 658 2470
vanpraat.frank@kpmg.nl

1. List all your assets

It is not uncommon for business units or departments to start their own initiatives, using user-friendly machine learning development kits or with the help of external suppliers. The risks associated with end-user computing based on spreadsheets are well known; the damage caused by end-user machine learning, or Shadow Analytics, can be many times greater. Be aware that existing overviews of data processing activities in your organization, for instance those maintained pursuant to the GDPR, may be incomplete from the perspective of compliance with new guidelines. Moreover, different guidelines for AI or algorithms may have a different focus and define different scopes of applicability. Creating a comprehensive overview of the data analysis and process automation assets within your organization is therefore an important first step towards getting a handle on responsible AI. After all, you cannot control what you do not know is there.
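Such an overview can start as a simple structured register. The sketch below shows one possible shape; the field names and example entries are our own illustrative assumptions, not a prescribed template. The point is to capture enough metadata per asset to later assess it against whichever guideline turns out to apply.

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticsAsset:
    """Illustrative inventory record for one analytics or automation
    asset; the fields shown are assumptions for this sketch."""
    name: str
    owner: str                    # accountable business unit or person
    technology: str               # e.g. "spreadsheet", "ML model", "rule engine"
    supplier: str = "internal"    # external suppliers matter for oversight
    processes_personal_data: bool = False  # links the asset to the GDPR register
    guidelines_in_scope: list[str] = field(default_factory=list)

register = [
    AnalyticsAsset("Churn model", owner="Marketing", technology="ML model",
                   processes_personal_data=True),
    AnalyticsAsset("Pricing sheet", owner="Sales", technology="spreadsheet"),
]

# Flag assets that process personal data but have not yet been mapped
# to any applicable guideline: these are the Shadow Analytics candidates.
gaps = [a.name for a in register
        if a.processes_personal_data and not a.guidelines_in_scope]
print(gaps)  # ['Churn model']
```

Even a lightweight register like this makes gaps visible and gives each asset an accountable owner.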

2. Comply or Explain

Some AI guidelines may simply demand compliance; others will come with a principles-based comply-or-explain approach. To adequately control AI in your organization, it is crucial to understand the specific risks per application and how these translate into control requirements, or into an explanation of why the risk to be controlled simply isn't there. One relevant aspect is the complexity of the technology used: a supervised learning algorithm comes with different risks than, say, a rule-based expert system. But perhaps even more important are the material characteristics that determine the risk profile of an AI application: the impact of the application on people and processes, and the degree of autonomy the system enjoys. A customer service chatbot is very different from a fraud detection algorithm. A structured risk assessment per AI application provides a solid basis for arriving at an appropriate and efficient control regime for that application. That is why quality risk assessment is fundamental to the Trusted Analytics value proposition.
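As a sketch of what "translating risks into control requirements" could look like in practice, the function below maps per-application scores on the three dimensions to example controls. The thresholds, control names, and the comply-or-explain branch are illustrative assumptions, not a standard or a KPMG methodology.

```python
def control_regime(complexity: int, autonomy: int, impact: int) -> list[str]:
    """Map per-application risk scores (1-5, illustrative scale) to
    example control requirements. Thresholds and control names are
    assumptions made for this sketch."""
    controls = ["documented risk assessment"]  # baseline for every application
    if complexity >= 4:
        controls.append("model validation and explainability review")
    if autonomy >= 4:
        controls.append("human oversight with override mechanism")
    if impact >= 4:
        controls.append("bias testing and incident escalation path")
    if max(complexity, autonomy, impact) <= 2:
        # comply-or-explain: document why heavier controls are not needed
        controls.append("explain statement: low-risk classification")
    return controls

# A fraud detection algorithm versus a customer service chatbot:
# same exercise, very different outcomes.
print(control_regime(complexity=3, autonomy=5, impact=5))
print(control_regime(complexity=4, autonomy=2, impact=2))
```

The value of structuring the assessment this way is consistency: two applications with the same risk profile end up under the same controls, and every deviation is explained rather than improvised.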

3. Keep costs under control

Bringing AI applications under a control regime costs money. This is AI's cost of control, and this cost factor should be part of any business case for AI technology. Investing in a clear AI governance structure is therefore not an unnecessary luxury; it is a necessary part of turning your investments in AI into a success. If you haven't already done so, include that investment in your 2021 budget!


The Trusted Analytics team of KPMG wishes you a successful and innovative 2021!