Public trust in Artificial Intelligence (AI) is vital for the continued acceptance of the technologies that are transforming the way we live and work.

The benefit and promise of AI for society and business are undeniable. AI is helping people make better predictions and informed decisions, enabling innovation and productivity gains. It is helping identify credit card fraud, diagnose diseases and facilitate the global fight against COVID-19.

The risks and challenges that AI poses, including codifying and reinforcing unfair biases and infringing on human rights such as privacy, are also undeniable. These issues are causing public concern and raising questions in Australia and around the world about the trustworthiness and regulation of AI systems.

If AI systems do not prove to be worthy of trust, their widespread acceptance and adoption will be hindered, and the potentially vast societal and economic benefits will not be fully realised. Despite the central importance of trust, to date little is known about citizens’ trust in AI or what influences it across countries.

In 2020, we conducted the first deep dive survey examining Australians’ trust in AI systems. This report extends that research by providing a comprehensive understanding of citizen perspectives across five countries: the United States, Canada, Germany, the United Kingdom and Australia.

Presented in conjunction with The University of Queensland.


Report highlights

Key findings from the report include:
  • Citizens have low trust in AI systems but generally ‘accept’ or ‘tolerate’ AI. 
  • Citizens believe current safeguards are insufficient and expect AI to be regulated.
  • The more people believe the impact of AI is uncertain, the less they trust AI systems.
  • Citizens’ trust and support of AI depends on the purpose of the AI system.
  • Citizens feel comfortable with some but not all uses of AI at work.
  • Citizens want to know more about AI but currently have low awareness and understanding of AI and its uses.
  • Confidence in entities to develop, use and regulate AI varies across countries.
  • The report outlines a pathway to strengthen public trust in AI.






Given the rapid investment in and deployment of AI, it will be important to regularly re-examine public trust in and expectations of AI systems as they evolve, and to expand the countries surveyed beyond Western nations, to ensure that AI use remains aligned with evolving societal expectations.



Credits

University of Queensland Researchers
Professor Nicole Gillespie, Dr Steve Lockey and Dr Caitlin Curtis

KPMG Advisers
James Mabbott, Richard Boele, Ali Akbari, Rossana Bianchi and Rita Fentener van Vlissingen

Acknowledgements
We are grateful for the insightful input, expertise and feedback provided by members of the Trust, Ethics and Governance Alliance at The University of Queensland, particularly Dr Ida Someh, Associate Professor Martin Edwards and Professor Matthew Hornsey; KPMG Partners Phillip Sands, Scott Guse, Joel Di Chiara and Leo Zhang; as well as domain expert input from Greg Dober, Mike Richmond and Professor Monica Janda.


