On April 8, 2019, the European Commission High-Level Expert Group (the “HLEG”) on Artificial Intelligence released the final version of its Ethics Guidelines for Trustworthy AI (the “Guidelines”). The Guidelines’ release follows a public consultation process in which the HLEG received over 500 comments on its initial draft version. The Centre for Information Policy Leadership at Hunton Andrews Kurth LLP contributed its own comments during this process.
The Guidelines outline a framework for achieving trustworthy AI and offer guidance on two of its fundamental components: (1) AI should be ethical, and (2) it should be robust, from both a technical and a societal perspective. The Guidelines aim to go beyond a list of principles and to operationalize the requirements for realizing trustworthy AI.
The Guidelines consist of three chapters:
- Chapter I outlines ethical principles and values that must be respected in the development, deployment and use of AI systems.
- Chapter II details seven requirements that AI should meet to ensure it is trustworthy (i.e., human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability).
- Chapter III sets out a trustworthy AI assessment list reflecting Chapter II’s requirements. The list is non-exhaustive and is intended to be applied flexibly depending on the specific AI use case at hand.
The HLEG plans to launch a pilot phase to test its trustworthy AI assessment list. The pilot will involve a range of stakeholders, including industry, research institutes and public authorities. Interested organizations can sign up for the European AI Alliance to be notified when the pilot commences. In early 2020, following the pilot, the HLEG will review the feedback it receives and revise the assessment list as appropriate.
Read the Guidelines in detail and view the full trustworthy AI assessment list.