2019 was an eventful year in the field of artificial intelligence (AI). The continuous flow of media coverage about the impact that artificial intelligence has, and will have, on society shows that the hype has not yet passed its peak.
In fact, we are only at the start of a transition in which AI will play an increasingly prominent role in people's lives. Politicians, too, have now joined the discussion. And it is more than just words: for many organisations, the past few years have been a period of experimentation and investment in AI. Data science teams have been set up, potential use cases are being investigated, and the first applications are going live.
But how do you reap the benefits of those investments in 2020, and how do you ensure that you do not become a trending topic because of a derailed algorithm? We suggest three New Year's tips to help you prepare for a successful AI year in 2020.
Within many organisations, the development of AI has a decentralised character, which is understandable given the diverse fields in which this technology can be applied. It is not uncommon for different business units or departments to start their own initiatives, with or without the help of external suppliers.
In addition, the availability and user-friendliness of development software make experimenting with AI increasingly accessible to users without a background in data science. The risks associated with spreadsheet-based end-user computing are by now well known; the damage caused by an incorrectly applied AI system can be many times greater. Creating an overview of the AI initiatives within your organisation is therefore an important first step towards getting a grip on AI. After all, you cannot control what you do not know.
A customer service chatbot is very different from a fraud detection algorithm, even though both are AI applications. To manage AI adequately within your organisation, it is therefore crucial to distinguish between applications on the basis of their specific risks. One relevant aspect is the technique used: a supervised learning algorithm raises different concerns than, say, a rule-based expert system.
But perhaps even more important are the underlying characteristics that determine the risk profile of an AI application: the impact of the application on people and processes, the degree of autonomy that the system enjoys, and the complexity of the algorithms used. A structured assessment of the risks per AI application provides a solid basis for arriving at an appropriate and efficient control regime.
In traditional information technology, we learned many years ago that IT problems often require solutions that reach beyond the IT department. Yet thinking about AI governance is still in its early stages in many organisations: roles and responsibilities have yet to be defined, and an overarching control framework is lacking. That is risky for those responsible at C-level, and difficult for data scientists looking for anchor points on which to base their design decisions. Investing in a clear AI governance structure is therefore not an unnecessary luxury. Rather, it is indispensable if the investments in AI are to be converted into success in a manageable way. If you have not already done so, include that investment in your 2020 budget!
The AI in Control team of KPMG wishes you a successful and innovative 2020.
Would you like to know how KPMG can help your organisation with AI-related questions? Please contact Frank van Praat, senior manager AI in Control.
© 2021 KPMG N.V., a Dutch limited liability company and a member firm of the KPMG global organization of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved.
For more detail about the structure of the KPMG global organization please visit https://home.kpmg/governance.