On March 12, 2020, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP submitted formal comments to the Office of the Privacy Commissioner of Canada (“OPC”) in response to its proposals for ensuring appropriate regulation of artificial intelligence (“AI”).
The OPC is currently engaged in policy analysis relating to legislative reform of the Personal Information Protection and Electronic Documents Act (“PIPEDA”). Part of this analysis involves examining PIPEDA’s application to AI systems, and the OPC has put forward 11 proposals for areas where it believes PIPEDA could be enhanced.
CIPL agrees that the issues raised by the OPC in the consultation are significant and believes that overcoming these challenges will require creativity, flexibility, agility, cooperation, and continued vigilance from both organizations and regulators. Applying existing accountability tools to AI applications forms a key part of this solution.
In its comments, CIPL recommends that the OPC:
- Maintain its principle of technological neutrality, and regulate based on the impact of technology uses rather than on whether or not a use of data falls within a specific definition of AI;
- Focus on a risk-based approach rather than a strictly rights-based approach in revising PIPEDA. This would focus attention on uses of data that pose the greatest risks for individuals and society, and provide flexibility to consider privacy within a broader scope of rights and interests;
- Deploy a risk-based approach to determine the parameters and conditions for when a right to object is appropriate, if such a right is ultimately incorporated into PIPEDA;
- Design transparency with the aim of giving individuals access to information such as the types of data that go into AI and automated decision-making models, the means to correct false or outdated information, and avenues to remedy erroneous decisions. Transparency, in the AI context, should not require the disclosure of algorithms to individuals;
- Incorporate Privacy by Design and Human Rights by Design as legal requirements, such that organizations will be required to develop processes that promote thoughtful innovation throughout the product or application lifecycle. Such requirements should, however, be in line with general principles of accountability rather than rigid processes, as this will allow organizations to find innovative ways to foster and implement responsible AI;
- Adopt a risk-based approach to purpose specification and data minimization principles and consider the context in which data is collected and processed to enable realistic and effective compliance with these principles without compromising the benefits of AI;
- Include alternative legal grounds for processing, such as legitimate interest, as well as solutions to protect privacy when obtaining meaningful consent is not practicable;
- Create a broad exception from all relevant statutory requirements for de-identified information, as de-identification can facilitate the responsible use of personal information to train and deploy new and beneficial technologies while also upholding individual privacy; and
- Mandate accountability as a governance model for enabling trust in AI development and use.
To read more about these recommendations, see the full set of comments.