In jurisdictions worldwide, new policy initiatives and regulations concerning the governance of data and AI signal the end of self-regulation and the rise of new oversight. As the regulatory environment continues to evolve at pace, leading organizations are addressing AI ethics and governance proactively rather than waiting for requirements to be imposed on them.
Through the course of 2020, we have seen AI deployed to help organizations better anticipate the impact of COVID-19 across the globe and across industry sectors, so that they can respond with greater resilience. In 2020, we have also seen a revitalized focus on the role technology and AI play across the environmental, social, and governance (ESG) landscape. This includes AI use cases and applications in healthcare, education, law enforcement, and financial services, among others. This expansion of AI-driven use cases has highlighted both the benefits and the potential risks of AI, notably the issue of trust in technology. While trust has long been a defining factor in an organization's success or failure, the risk of AI now goes beyond reputation and customer satisfaction: AI is playing an outsized role in shaping individuals' future well-being, even as few inside or outside the enterprise fully understand how it works. This whitepaper examines today's regulatory themes around AI governance from across the globe and provides organizations with a series of recommendations on how to establish trust in AI.
For AI solutions to be transformative, trust is imperative. This trust rests on four main anchors: integrity, explainability, fairness, and resilience. These four guideposts help organizations ensure the proper governance of their algorithms.