On February 19, 2020, the European Commission (“the Commission”) published a White Paper on artificial intelligence (“AI”) entitled “A European Approach to Excellence and Trust.” This followed an announcement in November 2019 from the Commission’s current President, Ursula von der Leyen, that she intended to propose rules to regulate AI within the first 100 days of her Presidency, which commenced on December 1, 2019. The White Paper was published alongside the Commission’s data and digital strategies for Europe.
The Commission also published an accompanying report on the safety and liability implications of AI, the Internet of Things and robotics, which was delivered to the European Parliament, the Council, and the European Economic and Social Committee. It identifies the ways in which existing legislation may need to be amended to account for the specific risks presented by emerging technologies, such as the creation of increasingly complex supply chains.
Some of the key takeaways from the White Paper include:
- Policy Framework. The White Paper sets out a policy framework with measures designed to bring together efforts at the regional, national and international level. It discusses the Commission’s proposed steps toward building an “ecosystem of excellence” to support the development and adoption of AI across the EU economy, as well as in the field of public administration. The White Paper notes that a “clear European regulatory framework would build trust among consumers and businesses in AI, and therefore speed up the uptake of the technology.” For example, some envisioned steps include focusing on working with Member States to secure EU-level funding, ensuring that small and medium enterprises (“SMEs”) have access to AI, and encouraging public-private partnerships.
- Key Risks of AI. The White Paper highlights some of the key risks presented by AI, including risks to fundamental rights such as privacy, human dignity, non-discrimination and the right to a fair trial. Part of this issue stems from what is known as the “black box effect”—the opacity of certain AI algorithms that prevents the reasoning underlying an AI system’s decision-making from being verifiable. The White Paper also highlights risks to the functioning of the liability regime, where flaws in AI embedded in products and services cause real-world harm whose root cause cannot be traced because of the opacity of the AI system. When the root cause of such issues is unclear, this creates legal uncertainty, particularly regarding the allocation of responsibility for malfunctioning systems.
- Existing Legislation. The White Paper highlights that there is already an extensive body of legislation in place governing certain aspects and uses of AI, both at the sectoral and national level. This includes data-specific legislation such as the General Data Protection Regulation (“GDPR”), as well as numerous pieces of legislation relating to equality and consumer protection. However, the Commission notes that effective application of existing legislation can be hindered by the lack of transparency around AI systems, and therefore considers that it may be necessary to adjust or clarify certain provisions. The Commission also highlights limitations in the scope of existing legislation, as well as the challenge of regulating AI-enabled products that may come to the market functioning in one way but adapt through machine learning to perform new tasks.
- Future Legislative Approach. With respect to the future, the Commission proposes taking measures to deal with the gaps in existing legislation, avoiding overly prescriptive regulation by adopting a risk-based approach. This would involve identifying “high risk” AI systems. The first relevant criterion for this categorization will be whether significant risks can be expected to arise given the nature of the sector in question (for example in healthcare or transportation). The Commission suggests that relevant sectors specifically be identified and addressed by any new regulatory framework. The second relevant criterion is whether the intended use of the AI system means that significant risks are likely to arise. This could be determined by looking at the potential impact on affected parties, such as where there is a risk of injury, death, or significant material or immaterial damage. These two criteria are proposed to be assessed cumulatively, and in theory, the mandatory requirements of any new regulatory framework would be directed at those systems that are identified as high risk. There may be instances where AI systems that do not fulfill these criteria are nonetheless considered high risk, such as where they may be used for intrusive surveillance technologies. The Commission also suggests the creation of a voluntary labelling scheme for AI systems not considered high risk, where operators make themselves subject to the mandatory requirements discussed below in order to achieve a quality label in relation to their AI applications and increase trust in their use of AI.
- Examples of Mandatory Legal Requirements. The types of mandatory legal requirements that the Commission proposes are:
- Providing quality training data, for example, ensuring that AI systems are trained on high-quality data so that fundamental rights are protected during the training stage, not just during deployment, and that bias or discrimination in the AI system is avoided.
- Keeping records and data, particularly records of the programming of an algorithm, so that problematic or unanticipated decisions made by AI can be traced back to their source.
- Providing clear information regarding an AI system’s capabilities and limitations, including the conditions under which the system can be expected to function as intended. Citizens also should be informed when they are interacting with an AI system.
- Robustness and accuracy, to ensure that the risks of a proposed system are considered during development and that all reasonable measures are taken to minimize the risk of harm. This involves creating AI systems that are resilient to attacks, including attempts to manipulate the underlying data or algorithms.
- Human oversight, in order to ensure that human autonomy is not undermined. For example, an AI system’s output should not be implemented without first being validated by a human, or human intervention should at least be ensured after such implementation.
- Specific requirements for remote biometric identification—a technology that should only be used where such use is duly justified, proportionate and subject to adequate safeguards.
- Allocation of Responsibility. When deciding how responsibility for such measures should be allocated between different actors in the AI supply chain, the Commission suggests that responsibility should fall on those best equipped to address the risk in question. The Commission proposes that responsibility for use of AI should apply beyond EU borders to all relevant economic operators providing AI-enabled products or services in the EU, whether established there or not.
- Future Developments. The Commission noted that given the nature of AI, any regulatory regime would need to be adaptive, stating, “[g]iven how fast AI is evolving, the regulatory framework must leave room to cater [to] further developments. Any changes [to existing legislation] should be limited to clearly identified problems for which feasible solutions exist.”
Comments are invited on the White Paper and can be submitted until May 19, 2020.