Artificial Intelligence (AI) is an increasingly ubiquitous part of Australians' everyday lives, transforming the way we live and work.1
The benefits and promise of AI for society and business are undeniable. AI is helping people make better predictions and more informed decisions, enabling innovation and productivity gains. It is helping identify credit card fraud, diagnose diseases and facilitate the global fight against COVID-19.
The risks and challenges that AI poses, including codifying and reinforcing unfair biases and infringing on human rights such as privacy, are equally undeniable.
Without public confidence that AI is being developed and used in an ethical and trustworthy manner, its full potential will not be realised. Are we capable of extending our trust to AI?
Presented in conjunction with The University of Queensland.
Our findings provide important and timely research insights into the public’s trust and attitudes towards AI and lay out a pathway for strengthening trust and acceptance of AI systems. We summarise the key findings. In the conclusion to the report, we draw out the implications of these insights for government, business and NGOs.
Trust is central to the acceptance of AI, and is influenced by four key drivers
Our results confirm that trust strongly influences acceptance of AI. Of the four key drivers of trust, the perceived adequacy of current regulations and laws is clearly the strongest. This demonstrates the importance of developing regulatory and legal mechanisms that people believe protect them from the risks associated with AI use.
Australians have low trust in AI systems but generally ‘accept’ or ‘tolerate’ AI
Trust in AI systems is low in Australia, with only one in three Australians reporting that they are willing to trust AI systems. Almost half of the public (45%) are unwilling to share their information or data with an AI system, and two in five (40%) are unwilling to trust the output of an AI system (e.g. a recommendation or decision). Many Australians are not convinced AI systems are trustworthy. However, they are more likely to perceive AI systems as competent than as designed to operate with integrity and humanity.
The public has clear expectations of the principles and practices that organisations deploying AI systems must uphold in order to be trusted. Most Australians (more than 70%) would be more willing to use AI systems if assurance mechanisms were in place, such as independent AI ethics reviews, AI ethics certifications, national standards for transparency, and AI codes of conduct.
Australians expect AI to be regulated and carefully managed
The large majority of Australians (96%) expect AI to be regulated, but most either disagree (45%) or are ambivalent (20%) about whether current regulations and laws are sufficient to make the use of AI safe. This highlights the importance of strengthening the regulatory and legal framework governing AI.
Australians feel comfortable with some but not all uses of AI at work
Most Australians (59%) disagree that AI will create more jobs than it will eliminate. They clearly expect advance notice (93%), retraining opportunities (92%) and redeployment (89%) in the event their jobs are automated.
Australians want to know more about AI but have low awareness and understanding of it and its uses
Only 51% of the public have heard about AI in the past year, and most (61%) report a low understanding of AI, including how and when it is used in everyday applications. For example, even though 78% of Australians report using social media, 59% of them are unaware that social media apps use AI. The good news is that most Australians (86%) want to know more about AI. Considered together, these results suggest there is both a need and an appetite for a public AI literacy program.
Our sample of 2,575 respondents was nationally representative on gender, age and state matched against Australian Bureau of Statistics (ABS) data, and broadly representative on income and downloading of the COVIDSafe app.
We collected data between 24 June and 21 July 2020.
This national survey is designed to understand and quantify Australians’ trust in and support of AI, and to benchmark these attitudes over time.
AI is used in a range of applications, such as calculating the best travel route to take in real-time, predicting what customers will buy, identifying credit card fraud, helping diagnose disease, identifying people from photos, and enabling self-driving vehicles.
By taking the first deep dive into the question of trust, this research provides a comprehensive and nuanced understanding of Australians' overall trust in AI systems, as well as in specific AI applications in the domains of healthcare, policing, HR and financial investment. These domains represent common applications of AI that relate to citizens, employees and consumers.
This research provides insights into the key drivers of trust, community expectations and confidence in the regulation of AI, expectations of the management of societal challenges associated with AI, as well as Australians’ current understanding and awareness of AI. Importantly, the findings provide a clear understanding of the practices and principles Australians expect organisations to use to responsibly develop and ethically deploy AI in society and the workplace.
These insights are relevant for building and maintaining the trust and acceptance of AI systems by the Australian public, as well as informing policy and practice across government, business and non-profits.
What is AI?
Artificial Intelligence (AI) refers to computer systems that can perform tasks or make predictions, recommendations or decisions that usually require human intelligence. AI systems can perform these tasks and make these decisions based on objectives set by humans but without explicit human instructions.
Chapter one summary: Do Australians trust AI?
We asked Australians how much they trust, accept and support AI in general, as well as specific applications of AI.
Australians are ambivalent about trusting AI, although they generally accept or tolerate it. Public support for the development and use of AI depends on its purpose, with most support for its use in healthcare (e.g. disease diagnosis) and least support for its use in human resources.
Chapter two summary: Who do Australians trust to develop and regulate AI?
We asked how much confidence Australians have in different entities to develop and use AI, as well as regulate and govern AI.
We found that the public are most confident in Australian research and defence organisations to develop, use, regulate and govern AI, and least confident in commercial organisations.
Chapter three summary: What expectations do Australians have about AI regulation?
We explored the expectations the public have around AI development and regulation, including the extent to which they think regulation is necessary, who should regulate, and whether current regulations and institutional safeguards are sufficient.
We found that regulation is clearly required and Australians expect external, independent oversight, yet few perceive current regulations to be sufficient.
Chapter four summary: What principles are important for Australians to trust AI systems?
Eight AI design and governance principles and practices are highly important for trust.
These principles are: technical robustness and safety; data privacy, security and governance; human agency and oversight; transparency and explainability; fairness and non-discrimination; accountability and contestability; AI literacy; and risk and impact mitigation.
Chapter five summary: How do Australians feel about AI at work?
AI is becoming more common in the workplace yet Australians vary in their use of AI at work.
Most report that little or none of their work involves AI. However, given that many Australians have a low understanding and awareness of AI, these figures may reflect either that AI is not being used at work, or that people are not aware of its use there. Australians are generally comfortable with AI use at work when it is not focused on them. Most Australians do not believe AI will create more jobs than it will eliminate, and if jobs are automated, workers expect support such as redeployment or retraining.
Chapter six summary: How do Australians view key AI challenges?
The pervasive use of AI in society is leading to a range of challenges.
This survey found that Australians expect all AI challenges to be carefully managed, with a particular focus on how data will impact people in the near future.
Chapter seven summary: How well do Australians understand AI?
To identify how well Australians understand AI, we asked about their awareness of AI, their knowledge of it, and their interest in learning more.
In general, Australians have low awareness and understanding of AI and low knowledge of its use in common everyday applications, yet most want to know more about AI. Results in Chapter Eight show that awareness and understanding of AI influences trust in AI systems.
Chapter eight summary: What are the key drivers of trust and acceptance of AI?
To identify the most important drivers of trust and acceptance of AI systems, we used an advanced statistical technique called path analysis.
These findings revealed that trust is central in AI acceptance, and that the strongest driver of trust is believing that current regulations and laws are sufficient to ensure AI use is safe.
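The mediation logic behind a path analysis of this kind can be sketched with simulated data. The sketch below is purely illustrative: the variable names, coefficients and simple two-step regression are assumptions for demonstration, not the report's actual model or data.

```python
import numpy as np

# Hypothetical causal chain: regulation belief -> trust -> acceptance.
# True path coefficients (0.6 and 0.7) are chosen for illustration only.
rng = np.random.default_rng(0)
n = 5_000

regulation_belief = rng.normal(size=n)                      # standardised driver
trust = 0.6 * regulation_belief + rng.normal(scale=0.8, size=n)
acceptance = 0.7 * trust + rng.normal(scale=0.8, size=n)

def path_coef(x, y):
    """Estimate the slope of y ~ x via ordinary least squares."""
    X = np.column_stack([np.ones_like(x), x])               # intercept + predictor
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = path_coef(regulation_belief, trust)   # path: regulation belief -> trust
b = path_coef(trust, acceptance)          # path: trust -> acceptance
indirect = a * b                          # indirect (mediated) effect on acceptance
```

With enough simulated respondents, the estimated paths `a` and `b` recover the true coefficients, and their product quantifies how much of the driver's effect on acceptance flows through trust — the mediation pattern the chapter describes.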
University of Queensland Researchers
Professor Nicole Gillespie, Dr Steve Lockey and Dr Caitlin Curtis
James Mabbott, Richard Boele, Ali Akbari, Rossana Bianchi and Rita Fentener van Vlissingen
We are grateful for the insightful input, expertise and feedback provided by members of the Trust, Ethics and Governance Alliance at The University of Queensland, particularly Dr Ida Someh, Associate Professor Martin Edwards and Professor Matthew Hornsey, KPMG Partners Phillip Sands, Scott Guse, Joel Di Chiara and Leo Zhang, as well as domain expert input by Greg Dober, Mike Richmond and Professor Monica Janda.
This KPMG and University of Queensland report provides an integrative model for organisations looking to design and deploy trustworthy AI systems.
KPMG and the University of Queensland take a deep dive on public trust in AI by examining citizens’ perspectives across five nation states
1. Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46-60.
2. Schwab, K. (2015, December 12). The Fourth Industrial Revolution: What it means and how to respond. Foreign Affairs. Retrieved from https://www.foreignaffairs.com/
3. OECD (2019), Artificial Intelligence in Society, OECD Publishing, Paris. https://doi.org/10.1787/eedfee77-en