Artificial intelligence and the exponential growth of personal data are forcing governments and businesses to look again at their approaches to privacy. As AI technology, privacy legislation and consumer sentiment evolve, organisations will need to rethink transparency and decision making for the digital age.
Privacy and data usage concerns continue to be the focal point of public discourse on Artificial Intelligence (AI). Privacy laws, founded on the assumption that human beings would be responsible for the collection of data and for subsequent decision-making, are being challenged by the rapidly evolving AI landscape. The use of AI introduces unique and unprecedented challenges to information privacy, as new tools and technologies have the capacity to collect, arrange, analyse and make decisions based on massive volumes of data, at speeds previously unfathomable to law and policy-makers.
The practice of ‘screen-scraping’ is one example of the tension between existing privacy laws and the emergence of AI. Screen-scraping or web-scraping involves the ‘automated, programmatic use of a website, impersonating a web browser, to extract data or perform actions that users would usually perform manually on the website’. Given dramatic increases in the volume and variety of Big Data available on the web, web-scraping and web-crawling technologies present considerable opportunities for commercial entities, governments and interested individuals to find, collect, make sense of and commercialise large amounts of information.
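To illustrate the mechanics (not any particular product or service), a scraper is simply a program that fetches a page and parses structured data out of its HTML, with no human ever viewing the site. The sketch below, using only Python's standard library, parses a hypothetical page embedded as a string; a real scraper would fetch it over HTTP while presenting itself as an ordinary browser.

```python
from html.parser import HTMLParser

# Hypothetical page content; the structure and field names are invented
# for illustration only.
PAGE = """
<html><body>
  <div class="listing"><span class="name">Alice</span><span class="email">alice@example.com</span></div>
  <div class="listing"><span class="name">Bob</span><span class="email">bob@example.com</span></div>
</body></html>
"""

class ListingScraper(HTMLParser):
    """Collects the text of every <span> whose class is 'name' or 'email'."""
    def __init__(self):
        super().__init__()
        self.current = None   # class of the span we are currently inside
        self.records = []     # (field, value) pairs extracted so far

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            cls = dict(attrs).get("class")
            if cls in ("name", "email"):
                self.current = cls

    def handle_data(self, data):
        if self.current:
            self.records.append((self.current, data))
            self.current = None

scraper = ListingScraper()
scraper.feed(PAGE)
print(scraper.records)
# → [('name', 'Alice'), ('email', 'alice@example.com'),
#    ('name', 'Bob'), ('email', 'bob@example.com')]
```

The point of the sketch is how little is required: a few dozen lines can harvest every record a site exposes, which is why the scale and speed of collection outstrip what privacy frameworks assumed.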
However, as the Cambridge Analytica scandal showed, there is an obvious and growing concern that individuals aren't genuinely consenting to this type of use of their personal information. The technology also makes it easy for vast amounts of data to be collected and used in ways that are inconsistent with community expectations.
There is also increasing concern that scraping and crawling activities may mutate into more sinister forms of online surveillance, especially where data is collected without the knowledge or consent of the individuals to whom it relates. This threat exists even where data has been de-identified, or where users are operating under a pseudonym, because bots, spiders and crawlers can search in a constant and systematic manner and are therefore more capable of identifying individuals.
Data that may be de-identified in isolation may be capable of re-identification when arranged and analysed as part of a broader data set. Other concerns include the commercial exploitation of personal data communicated on forums, blogs and social media sites highlighting consumer preferences and practices, and the emergence of a ‘free-riding’ culture whereby individuals jump-start businesses using data collected by someone else.
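The re-identification risk can be made concrete with a toy example (the data, field names and matching rule below are all invented for illustration). A ‘de-identified’ data set with names removed can often be linked back to named individuals by joining it to a second, public data set on shared quasi-identifiers such as postcode and date of birth:

```python
# A hypothetical "de-identified" release: names stripped, but
# quasi-identifiers (postcode, date of birth) retained.
deidentified = [
    {"postcode": "2000", "dob": "1985-03-12", "diagnosis": "asthma"},
    {"postcode": "3121", "dob": "1990-07-01", "diagnosis": "diabetes"},
]

# A separate, hypothetical public register that does include names.
public_register = [
    {"name": "A. Citizen", "postcode": "2000", "dob": "1985-03-12"},
    {"name": "B. Resident", "postcode": "3121", "dob": "1990-07-01"},
]

def reidentify(records, register):
    """Join the two data sets on the quasi-identifiers (postcode, dob)."""
    index = {(p["postcode"], p["dob"]): p["name"] for p in register}
    return [
        {"name": index[(r["postcode"], r["dob"])], **r}
        for r in records
        if (r["postcode"], r["dob"]) in index
    ]

matches = reidentify(deidentified, public_register)
print(matches[0]["name"], "->", matches[0]["diagnosis"])
# → A. Citizen -> asthma
```

Neither data set identifies anyone on its own; the privacy harm arises only when the two are combined, which is exactly what automated collection at scale makes routine.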
Governments are increasingly aware that critical concerns around the use and management of personal data will need to be addressed if AI is to realise its potential in an ethical way consistent with prevailing social values. Even amidst great political uncertainty, data privacy is currently one of the few areas of true bipartisan alignment in US politics.
In this regard, Europe has led the way with the enactment of the General Data Protection Regulation 2016/679 (GDPR), which implements more stringent consent requirements for data collection, provides users with the right to be forgotten and strengthens supervision of organisations that gather, control and process data. Article 22 of the GDPR allows individuals to choose not to be subject to automated individual decision-making, including profiling, that produces legal effects concerning them. It also requires organisations using this type of technology to build in safeguards for the individuals affected, such as a right to challenge those decisions. Regulations such as this are designed to protect consumers and increase public trust in an age of increasingly pervasive data practices. They seek to ensure that clear parameters are placed on the use of AI technology vis-à-vis personal and sensitive information.
However, as the application of AI becomes more commonplace in commercial contexts, it's increasingly clear that the paradox of privacy in AI will pose a significant challenge for regulators, government and business alike. Because machine learning relies on the input of Big Data and the evolution of algorithms over time, the rules these systems are driven by will inevitably come to exceed human understanding. This makes it increasingly difficult, if not impossible, for organisations to explain the logic of decisions made using their customers' or citizens' personal data. The very concept of AI, unchecked, could be argued to be antithetical to the foundations of privacy law today.
In order to remain compliant with the spirit of new approaches to regulation such as the GDPR and maintain the trust of their stakeholders, organisations will ultimately need to get granular on Big Data: what they hold, how it was collected, and how it feeds into automated decisions.
As the technology continues to develop in both capability and transparency, those using AI to power operations and services should consider the systems required to substantiate their data governance, management and decision-making processes.
On 7 February 2019 KPMG convened the inaugural Future AI Forum Australia. This initiative is designed to drive collaboration, accelerated thinking, genuine breakthroughs and ownership of solutions to these questions. The founding members will bring a holistic, community view to the issue, and will include leading academics, researchers, corporates, consumer groups, Federal government representatives, and relevant charities – all deeply involved in AI, its use, advantages and disadvantages.