Until recently, Artificial Intelligence (AI) was considered something that would only become available on a 3-5 year time horizon. However, companies are exploiting AI today: KPMG research suggests that within the next three years the 100 largest companies in the US expect to increase their annual spending on AI capabilities to $15 billion.
According to KPMG’s 2018 Guardians of Trust report, gaining trust around AI is a top goal for leaders: 45 percent of surveyed executives say that trusting AI systems is either challenging or very challenging in their organisation. However, the same report found that most leaders were unclear on what an AI governance approach should look like: some 70 percent say they don’t know how to govern algorithms.
Executives worry about the impact that bad or unethical machine-driven decisions could have on their brand reputation. They want to understand the ‘how’ and ‘why’ behind complex decisions made inside so-called black boxes.
We expect to see new policy initiatives and regulations around data and AI: the end of self-regulation and the rise of a new oversight model. Investing in a control framework, including methodologies and a toolset that help business users gain control over their AI programmes, will be key to building trust.
Read our summary of the key AI challenges for non-executives here: Artificial Intelligence — Challenges for non-executive directors