On March 22, 2021, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth published its paper on delivering a risk-based approach to regulating artificial intelligence (the “Paper”), intended to inform current EU discussions on the development of rules to regulate AI.

CIPL partnered with key EU experts and leaders in AI in drafting the Paper, translating best practices and emerging policy trends into actionable recommendations for effective AI regulation.

In the Paper, CIPL recommends a risk-based approach to regulating AI applications, comprising (1) a regulatory framework focusing only on AI applications that are “high risk”; (2) a risk-based organizational accountability framework that calibrates AI requirements and compliance to the specific risks at hand; and (3) smart and risk-based oversight.

Specifically, CIPL recommends:

  • Adoption of an easy-to-use framework for identifying high-risk AI applications, involving the use of impact assessments designed to assess the likelihood, severity and scale of the impact of the AI use;
  • Provision of criteria and guardrails for determining high-risk AI applications;
  • Consideration of the benefits of an AI application as part of a risk assessment;
  • Creation of an “AI innovation board” to provide additional guidance and assist organizations in identifying high-risk AI;
  • Treatment of illustrations of high-risk AI applications in the regulation or regulatory guidance as rebuttable presumptions;
  • Performance of a pre-screening or triage assessment prior to a full-scale impact assessment;
  • Explicit acknowledgment that AI uses with no or low risk are outside the scope of the AI regulation; and
  • Avoidance of sector-based classifications of AI as high-risk.

With regard to AI systems that do present a high risk, CIPL recommends the use of principle- and outcome-based rules rather than prescriptive requirements, in order to prevent the regulation from quickly becoming outdated. CIPL proposes providing incentives and rewards for achieving desired outcomes and including an explicit accountability obligation in any regulation, as well as calibrating compliance with the regulation’s requirements to the outcomes of a risk assessment (i.e., requiring more sophisticated compliance measures for higher-risk systems).

Any regulation should, in CIPL’s view, allow for continuous improvement, encouraging organizations to identify and address risks throughout the lifecycle of an AI application in an agile and iterative manner. Prior consultation with regulators or prior conformity assessments should be required only in relation to high-risk AI uses. CIPL further highlights the benefit of using accountability frameworks to address the challenges raised by the use of AI, recommending that any regulatory framework encourage and incentivize accountability measures, such as by linking accountability to external certification, allowing broader use of data in AI for socially beneficial projects, and recognizing demonstrated AI accountability as a mitigating or liability-reducing factor in the enforcement context.

CIPL sets out the essential features that an effective oversight framework should include. These are:

  • Novel and agile regulatory oversight, based on the current ecosystem of sectoral and national regulators rather than the creation of an additional layer of AI-specific agencies;
  • Cooperation through an AI regulatory hub composed of AI experts from different regulators to enable agile cooperation “on demand” and drive consistent application;
  • Maintenance of the competence of data protection authorities and the European Data Protection Board (“EDPB”) in cases where an AI application involves the processing of personal data;
  • Risk-based oversight and enforcement, focusing on areas of high-risk AI and recognizing compliance as a dynamic process and journey, allowing bona fide trial and honest error;
  • Enforcement as a last resort and prioritization of engagement, collaboration, thought leadership, guidance and other proactive measures to drive better compliance with AI rules;
  • Creation of a consistent EU-level scheme of voluntary codes of conduct, standards and certifications to complement the risk-based approach to AI oversight; and
  • Use of innovative regulatory tools based on experimentation, such as regulatory sandboxes.