On October 4, 2022, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth published a white paper outlining 10 key recommendations for regulating artificial intelligence (“AI”) in Brazil (the “White Paper”). CIPL prepared the White Paper to assist the special committee of legal experts established by the Federal Senate of Brazil (the “Senate Committee”) as it works towards an AI framework in Brazil.
CIPL recommends adopting a layered framework for AI that (1) enables an agile, technology-agnostic and future-proof regime that builds on existing legal standards and frameworks; (2) is risk-based and grounded on the holistic impact assessment of AI applications; (3) fosters innovation through organizational accountability; and (4) enables consistent and modern approaches to regulatory oversight.
To position Brazil as a competitive and responsible leader in AI innovation, Brazil’s AI framework should:
- Be designed in a way that enables it to evolve and be flexible to changes in the AI ecosystem. Brazil can achieve a technology-agnostic and future-proof regime by regulating only key AI issues and risks and enabling responsible AI through a range of accountability tools.
- Build on existing legal frameworks and avoid duplication of, and conflict with, existing requirements. Certain aspects of AI are already regulated by Brazil’s data protection law (Lei Geral de Proteção de Dados (“LGPD”)), the Civil Framework for the Internet, the Consumer Protection Code and the Access to Information Law.
- Adopt a principles- and outcomes-based regulatory approach that enables organizational accountability. Accountability requirements facilitate responsible AI innovation, trust in the AI ecosystem and the responsible collection and use of data for AI training, development and deployment.
- Incorporate a risk-based approach. Such an approach focuses on assessing the risks, benefits and reticence risk of AI applications, and enables a full range of AI innovations while ensuring the responsible development and deployment of AI.
- Incentivize the development and implementation of accountable AI practices. Organizations should be incentivized to adopt practices that enable their AI innovations in a responsible way while also ensuring compliance with any AI regime and appropriate protections for individuals.
- Be enforced by existing regulators in a collaborative way. Oversight and enforcement of Brazil’s AI framework should be performed by existing regulators, including Brazil’s National Data Protection Authority (Autoridade Nacional de Proteção de Dados (“ANPD”)), and such regulators should work together through a regulatory forum to ensure consistent interpretation of AI rules.
- Integrate co-regulatory mechanisms to enable responsible AI innovation. AI assurance frameworks, certifications, codes of conduct and standards can serve as important mechanisms to enable organizational accountability and appropriate regulatory oversight.
- Encourage new and agile approaches to regulatory oversight. Constructive engagement with industry and government bodies, as well as modern and agile oversight tools such as regulatory sandboxes, policy prototyping projects and data review boards, should all form part of Brazil’s AI regulatory toolbox.
- Address issues of liability in the longer term. In the immediate term, regulators should monitor market developments and work with legal experts, practitioners and representatives of AI developers, suppliers and users to engage in thoughtful discussions on liability in the AI context before settling on any particular legal position.
- Be formulated through a multi-stakeholder process. Creating a successful AI framework in Brazil will require consideration of a wide variety of stakeholder perspectives. Such a process has proven successful in the development of other legal frameworks in Brazil, including the LGPD and the Civil Framework for the Internet.
For more information about the above recommendations, read the White Paper.