On February 19, 2020, the Information Commissioner’s Office (“ICO”) launched a consultation on its draft AI auditing framework guidance for organizations (“Guidance”). The Guidance is open for consultation until April 1, 2020, and responses can be submitted via the ICO’s online survey.

This is the first piece of guidance published by the ICO that has a broad focus on the management of several different risks arising from AI systems, as well as governance and accountability measures. The Guidance contains advice on how to understand data protection law in relation to artificial intelligence (“AI”) and recommendations for organizational and technical measures to mitigate the risks AI poses to individuals. It also provides a methodology to audit AI applications and ensure they process personal data fairly.

The ICO notes that the Guidance aims to inform organizations about what it thinks constitutes best practice for data protection-compliant AI, and that the Guidance has two distinct outputs:

  • auditing tools and procedures which will be used by the ICO’s investigation and assurance teams when assessing the compliance of organizations using AI; and
  • the provision of indicative risk and control tables at the end of each section to help organizations audit the compliance of their own AI systems.

The Guidance is targeted at both technology specialists developing AI systems and risk specialists whose organizations use AI systems. The purpose of the Guidance is to assist such specialists in assessing the risks to individuals’ rights and freedoms that AI can pose, and in identifying the appropriate measures an organization can implement to mitigate them.

The ICO is seeking feedback from those with a compliance focus (e.g., data protection officers, general counsel, risk managers, etc.), as well as technology specialists (e.g., machine learning experts and data scientists, software developers and engineers, cybersecurity and IT risk managers, etc.).

The Guidance is divided into four parts that correspond to different data protection principles and rights:

  • Part one addresses accountability and governance in AI, including data protection impact assessments (“DPIAs”) and controller/processor responsibilities;
  • Part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance and mitigating potential discrimination to ensure fair processing;
  • Part three addresses security and data minimization in AI systems; and
  • Part four covers how an organization can facilitate the exercise of individual rights in its AI systems, including rights related to solely automated decision-making.

Further information on what these sections of the Guidance cover is provided below.

What Are the Accountability and Governance Implications of AI?

  • The Guidance states that organizations are legally required to complete a DPIA if they use AI systems that process personal data. The Guidance outlines what information the DPIA should cover, including, for example, an explanation of any relevant variation or margins of error in the performance of the system which may affect the fairness of the personal data processing.
  • The ICO notes that it can be difficult to describe the processing activity of a complex AI system. As such, it may be appropriate for an organization to maintain two versions of an assessment, with one version presenting a thorough technical description for specialist audiences, and the other containing a high-level description of the processing to explain how the personal data inputs relate to the outputs affecting individuals.
  • The Guidance outlines certain issues the DPIA should address, including how a DPIA should (1) assess the necessity and proportionality of an AI system; (2) identify and assess risks; and (3) identify mitigating measures (e.g., data minimization or providing opportunities for individuals to opt out of the processing).
  • The ICO recognizes that it is unrealistic to adopt a ‘zero tolerance’ approach to the risks to rights and freedoms: AI systems will inevitably involve trade-offs between privacy and other competing rights and interests. Instead, organizations should ensure that these risks are identified, managed and mitigated.
  • A short overview is provided of some of the most notable trade-offs that organizations are likely to face when designing or procuring AI systems: privacy versus statistical accuracy; statistical accuracy versus discrimination; explainability versus statistical accuracy; and explainability versus the exposure of personal data and commercial security. The Guidance outlines how these trade-offs can be managed and provides worked examples to assist organizations in assessing them.

What Do Organizations Need to Do to Ensure Lawfulness, Fairness, and Transparency in AI Systems?

  • The Guidance notes that when determining purpose and lawful basis, organizations should separate the development or training of AI systems from their deployment, because these are distinct and separate purposes, with different circumstances and risks. Accordingly, there may be different lawful bases for an organization’s AI development and deployment. The Guidance outlines some AI-related considerations (including examples) for each of the lawful bases under the General Data Protection Regulation (“GDPR”) to assist in determining whether an organization can rely on, for example, consent or performance of a contract.
  • The Guidance explains the controls an organization can implement to ensure that its AI systems are sufficiently statistically accurate for the personal data processing they undertake to comply with the fairness principle. For example, to avoid inferred personal data being misinterpreted as factual, organizations should ensure that their records indicate that outputs are statistically informed inferences rather than facts.
  • The Guidance outlines technical approaches to mitigate discrimination risk in machine learning models. In cases of imbalanced training data, it may be possible to balance the training data by adding or removing data about under- or overrepresented subsets of the population (e.g., adding more data points on loan applications from women); a simple rebalancing sketch is provided after this list. Alternatively, an organization could train separate models (e.g., one for men and another for women) and design each to perform well on its respective sub-group (although creating different models for different protected classes could itself violate non-discrimination law). In cases where the training data reflects past discrimination, an organization could modify the data, change the learning process or modify the model after training.
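To illustrate the rebalancing approach mentioned in the last bullet, the sketch below oversamples an under-represented group in a training set before a model is fitted. It is a minimal illustration only, not drawn from the Guidance; the DataFrame and the hypothetical “sex” and “approved” columns are placeholders.

```python
import pandas as pd

def oversample_minority(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Randomly oversample under-represented groups so that each group
    appears as often as the largest group in the training data."""
    counts = df[group_col].value_counts()
    target = counts.max()
    parts = []
    for group, n in counts.items():
        subset = df[df[group_col] == group]
        if n < target:
            # Duplicate randomly chosen rows from the smaller group.
            subset = pd.concat([subset, subset.sample(n=target - n, replace=True, random_state=seed)])
        parts.append(subset)
    # Shuffle so duplicated rows are not clustered together.
    return pd.concat(parts).sample(frac=1, random_state=seed).reset_index(drop=True)

# Hypothetical loan-application data with far fewer applications from women.
train = pd.DataFrame({
    "sex": ["M"] * 90 + ["F"] * 10,
    "income": range(100),
    "approved": [1, 0] * 50,
})
balanced = oversample_minority(train, "sex")
print(balanced["sex"].value_counts())  # "M" and "F" now appear 90 times each
```

Rebalancing changes the distribution the model learns from, so statistical accuracy should be re-assessed separately for each sub-group after training.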

How Should Organizations Assess Security and Data Minimization in AI?

  • The Guidance recognizes that there is no “one-size-fits-all” approach to security. Appropriate security measures will depend on the level and type of risks that arise from specific processing activities.
  • Hypothetical scenarios are provided to outline some of the known security risks and challenges that AI can exacerbate. These case studies include losing track of training data, and security risks introduced by externally maintained software used to build AI systems.
  • The Guidance notes that certain types of privacy attacks can reveal the personal data of the individuals whose data was used to train an AI system. Particular attention is given to two such attacks, ‘model inversion’ and ‘membership inference,’ with the Guidance providing examples of what these attacks are and how they work; an illustrative membership inference sketch is provided after this list.
  • Certain privacy-enhancing techniques are outlined which can be used to minimize the personal data being processed at the training stage, including perturbation, or adding ‘noise’ (a minimal noise-addition sketch also follows this list), and federated learning. The Guidance also outlines privacy-enhancing techniques which can be used to minimize the personal data being processed at the inference stage, including converting personal data into less ‘human readable’ formats, making inferences locally, and privacy-preserving query approaches.
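The sketch below is not taken from the Guidance; it only illustrates the intuition behind a simple, confidence-based membership inference attack: models that overfit tend to report higher confidence for records they were trained on, so an attacker who can query the model may guess membership by thresholding that confidence. A scikit-learn-style classifier exposing predict_proba is assumed, and the threshold value is hypothetical.

```python
import numpy as np

def membership_inference_guess(model, records: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Guess which records were in the training set by thresholding the
    model's confidence in its own prediction (a simple, illustrative attack)."""
    probs = model.predict_proba(records)   # shape: (n_records, n_classes)
    confidence = probs.max(axis=1)         # confidence in the predicted class
    return confidence >= threshold         # True = "probably a training-set member"
```

In practice such attacks are usually calibrated with ‘shadow models’ trained on similar data; the point of the sketch is simply that overconfident models can leak information about who was in their training data.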
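As a sketch of the perturbation technique mentioned above, the snippet below adds Laplace noise to a numeric feature before it is used for training, in the spirit of differential privacy. It is illustrative only and not from the Guidance; the sensitivity and epsilon values are hypothetical and would need to be chosen for the actual data and acceptable risk.

```python
import numpy as np

def perturb(values: np.ndarray, sensitivity: float, epsilon: float, seed: int = 0) -> np.ndarray:
    """Add Laplace noise with scale sensitivity/epsilon to each value, so the
    training data no longer contains the exact original personal data."""
    rng = np.random.default_rng(seed)
    return values + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=values.shape)

# Hypothetical example: perturb salaries before using them as a training feature.
salaries = np.array([32_000.0, 45_500.0, 51_250.0])
noisy_salaries = perturb(salaries, sensitivity=1_000.0, epsilon=0.5)
```

Stronger noise (a smaller epsilon) gives more privacy but reduces statistical accuracy, which is one of the trade-offs the Guidance asks organizations to manage.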

How Do Organizations Enable Individual Rights in AI Systems?

  • The Guidance provides an overview and examples of how data subject rights may apply with respect to personal data processed in AI systems.
  • The Guidance recognizes that rights relating to automated decisions can be a particular issue for AI systems; for example, systems based on machine learning may be more complex and present more challenges for meaningful human review. Machine learning systems make predictions or classifications about people based on data patterns, and even when they are highly statistically accurate, they will occasionally reach the wrong decision in individual cases. Such errors may not be easy for a human reviewer to identify, understand or fix. While not every challenge from an individual will result in the automated decision being overturned, organizations should expect that many could be. The Guidance notes two particular reasons why this may be the case in machine learning systems: (1) the individual is an ‘outlier,’ or (2) assumptions in the AI design can be challenged.
  • The Guidance outlines certain steps that organizations can take to fulfill rights related to automated decision-making, including designing and delivering appropriate training and support for human reviewers.