Navigating bias and supremacy in artificial intelligence (AI)

With the development and use of artificial intelligence on the rise in Australia, what are the key ethical considerations business leaders should keep top of mind?

It’s difficult to imagine a conversation about frameworks for AI without questions of bias and supremacy coming to the fore. Together these two concepts represent the most pressing ethical issues raised by the development and integration of AI today. Contrary to popular belief and fears of human redundancy, human beings have an increasingly important role to play in ensuring the responsible development, oversight and use of AI. As AI investment reaches unprecedented levels, discussion around this technology needs to move beyond dinner party hypotheticals and actively address a host of complex and critical questions.

Bias is said to be hardwired into the human condition. Researchers have identified over 180 types of human bias capable of affecting decision-making.1 If developers are biased, or if there is bias in what they have been asked to do, then bias in the algorithms they develop, or in the results those algorithms generate, is unavoidable. Confirmation bias, groupthink, the mere exposure effect and anchoring are just some of the unconscious cognitive behaviours that make bias in AI inevitable.

While AI is set to enhance process efficiency in workplaces and make way for deeper intellectual engagement, there is a live risk that AI and machine learning programs will operate to the advantage of dominant groups at the expense of others. Key concerns are already evident in recruitment, where models trained on historical hiring decisions risk institutionalising exclusion, as the sketch below illustrates.
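
To make the mechanism concrete, here is a minimal, entirely hypothetical sketch: a naive screening model trained on synthetic, deliberately skewed historical hiring data simply learns the skew back. The data, the group labels and the model are illustrative assumptions, not a description of any real system.

```python
# Hypothetical illustration only: synthetic data, deliberately naive model.
import random

random.seed(0)

# Synthetic historical hiring records: (years_experience, group, hired).
# The fictional historical process penalised group "B" regardless of merit.
history = []
for _ in range(5000):
    experience = random.randint(0, 10)
    group = random.choice(["A", "B"])
    merit = experience / 10                      # genuine signal
    penalty = 0.4 if group == "B" else 0.0       # encoded historical bias
    hired = random.random() < max(merit - penalty, 0.0)
    history.append((experience, group, hired))

# A naive "model": recommend hiring when the historical hire rate for the
# same (experience, group) profile exceeds 50 percent. Because group is a
# feature, the historical penalty against group "B" is learned verbatim.
def recommend_hire(experience, group):
    matches = [h for e, g, h in history if e == experience and g == group]
    return bool(matches) and sum(matches) / len(matches) > 0.5

# Two applicants with identical experience receive different outcomes.
print("Group A, 6 yrs experience:", recommend_hire(6, "A"))  # typically True
print("Group B, 6 yrs experience:", recommend_hire(6, "B"))  # typically False
```

Dropping the group column would not, by itself, repair such a system: in real data, attributes such as postcode, school or employment gaps can proxy for group membership, which is why balanced data and robust review processes matter.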

A recent study published by McKinsey asserted that synthetic ID fraud is the fastest-growing type of financial fraud in the United States, accounting for 10 to 15 percent of losses in a typical unsecured lending portfolio.2 Potential solutions could include leveraging third-party data to test the granularity and accuracy of the information available, but for some individuals a low depth or consistency score will not necessarily indicate a high-risk profile. An individual seeking asylum or leaving an abusive relationship, for example, may have an incomplete data profile for the purposes of any model. These examples demonstrate the growing influence of AI over what is treated as real and who is treated as legitimate.
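
As a hedged illustration of why thin data is not proof of fraud, the sketch below scores a profile by how many assumed third-party sources hold a record. The source names, the depth_score function and the 0.5 threshold are invented for the example, not any vendor's actual methodology.

```python
# Hypothetical sketch: a fraud screen that flags "thin" identity profiles.
# All names and thresholds here are illustrative assumptions.

def depth_score(profile):
    """Fraction of expected third-party data sources holding a record."""
    expected = ["credit_bureau", "utilities", "telco", "electoral_roll"]
    return sum(1 for s in expected if s in profile["sources"]) / len(expected)

def flag_as_synthetic(profile, threshold=0.5):
    return depth_score(profile) < threshold

# A fabricated synthetic identity is thin, and is correctly flagged ...
synthetic_id = {"name": "J. Doe", "sources": ["telco"]}
print(flag_as_synthetic(synthetic_id))       # True

# ... but so is a genuine applicant with an incomplete history, such as a
# person newly arrived as an asylum seeker. Low depth is not proof of fraud.
genuine_thin_file = {"name": "A. Sadiq", "sources": ["telco"]}
print(flag_as_synthetic(genuine_thin_file))  # True: a false positive
```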

The ways in which AI can benefit society, however, far outstrip the risks of bias, and discussion should not be limited by fear and apprehension. Rather, it should be oriented around ensuring global best practice when developing AI models, so that ethical considerations remain a core focus while a competitive environment for the development of Australian AI is preserved. Corporate social responsibility groups will play a critical role in determining what frameworks are put in place to govern the ethical development of AI. Business sector leaders will be called on to demand the ethical development and integration of AI based on balanced data and responsible training sets, accompanied by robust review and governance processes.

AI supremacy, on the other hand, is a more philosophical question: how much control should humans retain in the decision-making process? The recent proliferation of AI investment is promising; however, there is a significant risk that we will hand over too much power to AI before we have a solid understanding of the technology and its ultimate capabilities. As noted above, machine learning technologies will irrevocably transform our social fabric, and developers run the risk of creating technology that does not operate on principles of fairness and equality for all.

Transparency will be key to establishing and maintaining the public trust on which AI depends. The parameters placed on a bot’s ability to make independent and final decisions will be set at the limits of the public’s trust in developers and regulators, and will create the need for a degree of human oversight of the technology and its results. Confidence that those in control are acting ethically, and that due consideration is being given to the values and principles on which this technology is founded, is essential if Australia is to make the most of AI. A simple routing rule, sketched below, shows what such oversight can look like in practice.
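
One way to picture such oversight, purely as a sketch and not an established standard, is a rule that lets an automated decision stand only when the model is confident and the stakes are low, escalating everything else to a human reviewer. The field names and the 0.9 confidence floor below are assumptions for the example.

```python
# Hypothetical sketch of a human-oversight gate. A bot's decision is final
# only when it is confident AND the stakes are low; everything else goes to
# a human reviewer. The thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # e.g. "approve" / "decline"
    confidence: float   # the model's own confidence, 0.0 to 1.0
    impact: str         # "low" or "high" stakes for the person affected

def route(decision: Decision, confidence_floor: float = 0.9) -> str:
    if decision.impact == "high":
        return "human review"              # final say stays with a person
    if decision.confidence < confidence_floor:
        return "human review"              # the model is unsure
    return f"auto-{decision.outcome}"      # bounded autonomy only

print(route(Decision("approve", 0.97, "low")))   # auto-approve
print(route(Decision("decline", 0.97, "high")))  # human review
print(route(Decision("approve", 0.60, "low")))   # human review
```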

As we invest in and explore these opportunities, we need to be mindful of the ethical risks and challenges, particularly to our social values. The discussion going forward needs to be pragmatic and to put safeguards in place pre-emptively to protect people, data, privacy and security. Designing the framework and principles in advance is critical; a traditional system of penalties and consequences does not recognise the unique nature of AI and will not be sufficient to build trust and confidence in it. Australia’s business community will play a key role in developing a shared framework of principles for the safe and ethical use of AI.

On 7 February 2019, KPMG will convene the inaugural Future AI Forum Australia. This initiative is designed to drive collaboration, accelerated thinking, genuine breakthroughs and ownership of solutions to these questions. The founding members will bring a holistic, community-wide view to the issue, and will include leading academics, researchers, corporates, consumer groups, Federal Government representatives and relevant charities – all deeply involved in AI and its use, advantages and disadvantages.
