Internal auditors need to think about the ethical issues surrounding new technology. But how and why now?
Would you get yourself microchipped? It’s a relatively painless process, a quick injection to insert a chip no bigger than a grain of rice under the skin.
In return, you receive a hefty dose of convenience, no longer needing to carry around a cumbersome bunch of keys or a wallet or purse.
Thousands of employees in Sweden think it’s a great idea – they have had these microchips implanted. One wearer calls it her “electronic handbag”.
The procedure itself may be no more than a sharp scratch but, for some, the concept of microchipping heralds a whole world of pain – the alarming descent into the elimination of privacy. The question is not so much can we do it, but should we do it?
This is just one example of how society needs to think through the ethical issues surrounding new technologies before we implement them.
For internal auditors, these types of issues are about to land on their desks.
While the chip programme in Sweden has relied entirely on volunteers, future programmes implemented by others may not.
What are the implications for society if one employer decides to make microchipping a prerequisite to employment?
What if governments insist on it for their citizens to access benefits? What if a government decides to only roll it out to a certain section of the population, such as refugees or asylum seekers?
It may seem like science fiction, but society is already grappling with the implications of new technology. Take Facebook, for instance: is it a technology platform or a media company?
And why is something that started as a way of connecting friends now being used by some people to spread misinformation, incite violence or influence elections?
Companies – and the internal audit function – are going to have to get a grip on these topics, which can be a minefield to navigate.
But there are critical questions that can be addressed to avoid things blowing up in their faces.
For instance, who are the investors in the company? Are their values aligned? Or could they be a source of reputation risk?
Some stakeholders might be alarmed if they find out the company’s funded by a state with a questionable human rights record, for example.
Another difficult question is whose job it is to come up with guidelines that everyone can follow. Is it the responsibility of supranational bodies – the UN, for instance – or is it up to governments? How much responsibility should company boards take? As things stand, many in the C-suite do not see ethics as their responsibility.
There have been some small steps taken in the right direction.
The Partnership on AI is an organisation that aims to bring together those working in the field to examine the risks of AI as well as to identify its promise. OpenAI, a research company sponsored by, among others, Elon Musk and Peter Thiel, has a similar ambition.
The time may come very soon when internal auditors will be asked to give a formal opinion on an organisation's AI ethics and provide assurance.
The framework under which this might happen is unclear. In the future, customers might demand some sort of AI kitemark or smart contract – something that assures them that an organisation's practices live up to its stated values and ethics.
And if there is an opinion to be given or assurance to be provided, then internal auditors need to be part of this framework discussion.
In the current climate, much of the discussion about AI concerns algorithmically powered robots either killing us or taking all our jobs.
But as those algorithms become more prevalent in our everyday lives, the public conversation will move much more towards ethics, values and data. And auditors will need to take a front-row seat.