Artificial intelligence (AI) permeates our lives. We use it to navigate on the road, fly safely and keep our inboxes clear of spam. Some of us need it to spell correctly. And we all feel its presence in targeted advertising.
Most of the time, AI is working away in the background, just beyond our immediate perception. Except when something goes wrong. Have you ever blindly followed your GPS the wrong way up a one-way street? Hopefully, your human brain kicked in and took back control to avert disaster.
But what about in a business context, where AI is operating behind the scenes and on a potentially massive scale? One example is intelligent automation (IA), which happens when robotic process automation (RPA) technology is combined with AI to support cognitive decisions or actions. By effectively mimicking the decision-making process of a human operator, IA promises efficiency gains that translate into impressive potential for cost and time savings.
As is often the case with rapid technological change, regulatory provisions lag behind when it comes to AI, yet 60% of business leaders already see regulatory constraints as a barrier to implementing AI. Sooner or later, the topic will certainly move further up the regulatory agenda. Leaders should be anticipating tomorrow’s requirements now to future-proof their business. They also need to consider how they can safeguard the trust of other stakeholders.
The widespread use of AI will make it imperative – and more difficult – to ensure that algorithm-driven processes produce trustworthy outcomes. Non-compliance with internal or external requirements, or failure to consider all relevant aspects of compliance, could lead to ineffective products and solutions, or to regulatory and market repercussions. What can companies do to avoid introducing bias (e.g. gender or racial bias) when decisions are made by an algorithm? And how can they reassure stakeholders that they’ve considered these points when adopting AI solutions?
As algorithms and deep learning evolve, systems will become even more complex – ultimately to the point that the human mind has difficulty keeping up. The nature of this increased complexity is also self-perpetuating. Although it might appear – and is often touted – as a simplification, AI can leave companies struggling with what is known as “technical debt”. In other words, payback for quick fixes comes further down the line. If issues are ignored, the interest to pay on that debt is even higher – from embarrassing malfunctions to lost revenue.
Let’s return to our GPS example. What if that one-way street were a massive production plant, a nuclear power station or an airplane mid-flight? You’d want there to be a manual override, a real “driver”. That is why organizations need to be certain that they have the right safeguards in place to avoid such situations. Or, when the worst comes to the worst, companies need to be in control of their controls and able to roll back to manual process execution.
The right controls can’t be put in place overnight. To help organizations manage and evolve AI responsibly, KPMG has introduced AI in Control, a framework supported by a set of methods, tools, and assessments to help organizations realize value from AI technologies while achieving imperative objectives like algorithm integrity, fairness and agility.
Business leaders who recognize the need for controls and the value of trust can unleash their algorithms to make innovative exploratory journeys – but with humans safely in the driving seat.
Read our brochure: Unleashing the potential of AI - from a place of control (PDF)