We’ve only just scratched the surface of artificial intelligence (AI). But already, it poses tough questions for internal audit.
Technology has revolutionised enterprise processes, delivered real-time management information and enhanced governance. Now cognitive technologies like machine learning and predictive analytics are poised to create whole new ways of working. What are the skills internal audit (IA) needs to judge the risks inherent in these new systems? And how might a smarter understanding and use of breakthrough tech give IA a louder voice in strategy?
By 2006, the web was well established, the smartphone was emerging and enterprise resource planning (ERP) was the norm. Yet the past ten years have seen extremely rapid uptake of a raft of new technologies – opening up new opportunities for businesses.
Data analytics, cognitive systems, cloud computing, the Internet of Things and a host of other tech that didn’t exist ten years ago are now core to strategic planning.
Internal audit teams need to ensure they keep pace with these technologies. In part that’s to make sure the technologies’ capabilities are fully exploited while their risks are managed. But as IA strives to be more strategic, it must also be able to look ahead and advise on how the inter-relationships between technologies might shape an organisation’s approach to risk and reward.
Current breakthrough technologies provide a tough enough challenge. Cloud computing, for example, seems logical enough from an operational perspective. But it brings with it some tough questions about risks and dependencies that even the technologists and lawyers wrestle with.
Artificial intelligence – including cognitive technology, natural language processing and machine learning – and data analytics are now becoming compelling investments for boards.
They allow organisations to analyse structured and unstructured data to improve decision-making and provide better real-time analysis and monitoring. These technologies also enable management to draw on external information – market, customer or competitor data – so the enterprise can react more quickly to new developments.
But they’re also highly sophisticated, create new market and systems dependencies and, in many cases, act autonomously enough that conventional monitoring approaches start to look inadequate.
“Internal audit has a responsibility to look at what technologies the organisation is using, and question whether these IT capabilities match the needs and risk appetite of the business,” says Paul Holland, who leads on data analytics-enabled IA for KPMG in the UK. “And the same applies to internal audit’s own use of technologies. Do the tools used to monitor risks meet the organisation’s emerging requirements?”
In other words, familiarity with breakthrough innovations isn’t just about understanding emerging risk – it’s about managing it. “Cognitive technology and machine learning offer the potential to provide far better real-time management information and greater visibility into operations,” adds Shamus Rae, Head of Innovation and Investments at KPMG in the UK. “Internal audit needs to apply such best-in-class technologies to improve governance and controls.”
This isn’t the stuff of science fiction movies any more. Almost every industry sector is seeing the application of cognitive technologies. In the oil and gas sector, for example, cognitive automation is informing decision-making around exploration. Insurance companies are using these technologies to improve and enhance their claims management.
Telecoms firms are using them to improve automated responses to customer queries. Pharmaceutical firms are better targeting their R&D spend. And there are well-established examples like high-frequency algorithmic trading and a slew of consumer-facing services such as Siri.
“Cognitive technologies are becoming increasingly prevalent, and internal audit teams need to understand how their organisations manage the risks around them,” says Ian Arnold, Associate Director at KPMG in the UK. “Then machine learning adds an extra dimension to the challenge.”
What makes machine learning different is that the software effectively reprogrammes itself. This introduces a whole new set of uncertainties and risks to monitor. For IA the question becomes: who is accountable? (And, perhaps, where is the off switch?)
As well as challenging our understanding of systems, and creating new tools for risk management, cognitive technologies might also give IA a much louder voice in strategic decisions around investment and risk appetite. For instance, take a probability-based cognitive system that gives the right answer in 97 percent of cases. That could be far more accurate than any human member of staff. But will the board, management and even frontline staff see 97 percent as high enough?
This is no longer a philosophical thought experiment. Tesla’s autopilot software has a track record of one death per 130 million miles. That’s safer than human drivers: in the US, there are about 1.1 deaths per 100 million miles driven – roughly 1.4 deaths over the same 130 million miles. But how comfortable would you be if the driver of your Tesla decided to take a nap while cruising down the motorway at 70mph?
Or how do you apply change control over algorithms created by machine learning? The board may not even know these exist, much less what their impact is. How would management test them for design and operating effectiveness? Is internal audit ready for such questions? We think there are ways it can be.