Artificial Intelligence Regulation in Brazil


© Dr. John P. Byrne 2025

As AI moves to fundamentally and profoundly transform society, Brazil, as a voice from the Global South, contributes its perspectives to a human-centred, inclusive, development-oriented, responsible and ethical approach to AI, with the fundamental aim of improving people’s lives and bridging the digital divide.[1]

Brazil is combining a comprehensive approach to regulating AI with a willingness to bring order to the field of artificial intelligence. Globally, there are three positions: those who do not wish to regulate the field (Argentina); those who wish to regulate with the safety of the public in mind (Europe); and those who favour a “light touch” to let artificial intelligence grow in the market (the United States). Brazil sits firmly in the second of these three camps.

Introduction

This chapter looks at the approach to Artificial Intelligence regulation in Brazil. Unlike the position in the United States of America and in the EU, there has been no adopted regulation on the matter in Brazil – at least not yet. Still, Brazil was very quickly out of the traps in its review of and response to Artificial Intelligence. This chapter will look at some of those early-phase responses and at the initial attempts to adopt legislation on the matter. It will then look at more recent developments and will show that, following a review, the primordial legislative position is now comparable to that found in the EU – an example of the so-called “Brussels Effect”,[2] the effects of which can already be found across several different regulatory areas.[3]

Background

Brazil is the fifth largest social media market in the world, with social networking audiences anticipated to grow to 188 million by 2027.[4] Unsurprisingly, with the largest population in Latin America, at 216 million, Brazil also has the largest online audience on that continent. Brazil has recently asserted itself globally on important issues of international concern, which shows its willingness to engage in transnational issues of concern both to it and to other countries: such issues include not just the protection of the Amazon rainforest, which falls within its own jurisdiction, but also the wider issue of leadership in the green economy: 92 per cent of Brazilian electricity comes from renewable sources,[5] for example. On the issue of Artificial Intelligence, Brazil has actively engaged in international discussion pertaining to best practices in AI, and did so at a very early stage in the international dialogue on the subject.[6]

Artificial Intelligence Regulation in Brazil

The OECD records that Brazil demonstrated early-stage engagement and dialogue on the issue of Artificial Intelligence regulation as early as 2019. In that year the Federal Brazilian Government undertook a consultation exercise, running from December 2019 for three months, which invited stakeholders across business, academia, civil society and the technical community to make contributions. Overall, more than 500 participants were recorded as having taken part,[7] with the objective of informing policy formulation and policy design. The Brazilian EBIA,[8] or AI strategy, was established in 2021, around the same time the European Commission published its proposal to regulate Artificial Intelligence in the European Union.[9] The EBIA is based on five principles defined by the OECD for the responsible management of AI systems, namely: (i) inclusive growth, sustainable development and well-being; (ii) human-centred values and fairness; (iii) transparency and explainability; (iv) robustness, security and safety; and (v) accountability. Brazil has made strides to improve its digital ecosystem to incentivise AI innovation, balancing that incentive with regulatory measures.[10]

According to the OECD:

“The EBIA is part of a series of technology-related initiatives that have been implemented in Brazil during the past years, including the Brazilian Strategy for Digital Transformation (E-Digital), and General Data Protection Law (LGPD), amongst others. After two years of a public consultation that gathered around 1000 contributions, as well as a consultancy in AI hired by the Federal Government, the Brazilian AI Strategy was released. It was the first federal strategy which focused specifically on AI, and it intends to be the main framework directing other initiatives and strategies to be released in this topic in the near future.”[11]

The Brazilian Government has indicated its commitment to the “equitable and sustainable distribution of AI’s benefits across society”.[12] According to the OECD, many AI initiatives were commenced and supported by the Brazilian government, focusing on nine axes, as follows:

  1. Legislation, regulation and ethical use;
  2. AI governance;
  3. International aspects;
  4. Qualifications for a digital future (Education);
  5. Workforce and training;
  6. Research, Development, Innovation and Entrepreneurship;
  7. Application in the productive sectors;
  8. Application in the public sector;
  9. Public security.

Overall, the EBIA presents 73 strategic actions across broad-based areas – legislation, governance and international aspects – applying to a number of specifically identified areas: education, workforce and training, research, development, innovation and entrepreneurship, application in the productive sectors, application in the public sector, and public security.[13]

Since the initiation of the EBIA, Brazil has established six applied centres for AI, called CPAs, in the areas of smart cities, agriculture, industry 4.0 and health. These join pre-existing AI-focused establishments, including the Centre for AI (C4AI) and the Brazilian Association of Research and Industrial Innovation Network of Digital Technologies and Innovation, known as Embrapii’s Network. The objective of the network is to “leverage the productive capacity and competitiveness of Brazilian companies, encouraging the use and development of frontier technology in the industrial production process, based on AI”.[14] The EBIA also affords grants for startups and establishes education programmes which aim to upskill the current workforce from elementary to postgraduate level.[15]

“With these initiatives underway, Brazil is strengthening its position in AI technology to face national challenges, such as strengthening the skills of its critical mass, in terms of human and physical capabilities, to fully and competitively embrace AI-enabled transformation.”[16]  

Since 2019 several bills have circulated in the Brazilian National Congress to regulate AI systems.[17] Bills nº 5.051/2019 and nº 872/2021 were laid before the Federal Senate. Bill nº 21/2020 was laid before the Chamber of Deputies. Brazil has a bicameral legislature, and a Bill may be laid before either house: the house before which the Bill is laid is the initiating house, and the other acts as the revising house.

At the beginning of 2022 the Chamber of Deputies approved Bill nº 21/2020 and sent it to the Senate, where that House decided to compose a Commission of Legal Experts, called CJSUBIA and comprising experts with recognised expertise in technology law and regulation, to prepare an alternative Bill on Artificial Intelligence. A series of public hearings was organised in April and May of 2022, bringing together more than 50 specialists from different groups, including public authorities, the business sector, civil society, and the scientific-academic community.

The hearings were organised around four main axes: (i) concepts, understanding, and classification of artificial intelligence; (ii) impacts of artificial intelligence; (iii) rights and duties; and (iv) accountability, governance, and supervision.

In June 2022 an international seminar was organised to understand the international position on best practice outcomes for this area. A period of research collaboration followed which had regard to similar regulatory efforts in other jurisdictions. 

In December 2022 the Commission published a 900-page report which included a draft alternative bill – Bill No. 2.338/2023.[18] It was initiated by Senator Rodrigo Pacheco (PSD/MG) and lay for a time with the Temporary Internal Commission on Artificial Intelligence in Brazil,[19] comprising a representative base of Senators. This Commission discussed drafts and held 14 public hearings.[20] In December 2024 it approved a watered-down version of the Bill, which was immediately passed by the Senate. In keeping with the original version of the Bill, the legislation as passed by the Senate retains a risk-based regulatory model which imposes obligations on developers, distributors and applicators of high-risk systems. Risk assessments that address biases and potential discrimination are required – a move that models the EU AI Act. The regulation also classifies certain activity as high-risk: traffic control, student admissions, hiring and promoting employees, and border and immigration control. The main features of the Bill that were dropped include the classification of certain social media algorithms as high-risk.[21] The Bill is currently subject to review by the Chamber of Deputies.[22]

Movement Towards the European Union position

Tracing the development of the legislative initiatives in Brazil clearly shows the Brussels Effect in action. Until the establishment of Bill No 2.338/2023, the primordial Bill in Brazil was Bill No. 21 of 2020,[23] which as originally conceived was drafted without mention of a risk classification system for Artificial Intelligence systems, although it did mention a risk-based management approach. In that Bill an AI system was defined, interestingly, as “a system based on a computable process that, from a set of goals defined by humans, can, through data and information processing, learn to perceive and interpret the external environment, as well as interact with it, make predictions, recommendations, categorisations, or decisions, utilizing, but not limited to, techniques such as: machine learning systems, including supervised, unsupervised, and reinforcement learning; systems based on knowledge or logic; statistical approaches, Bayesian inference, research and optimization methods.”[24] Risk-based management was mentioned in Article 6, which stated: “the development and usage of artificial intelligence systems shall consider the specific risks and definitions of the need to regulate artificial intelligence systems, and the respective degree of intervention shall always be proportional to the specific risks offered by each system and the probability of occurrence of these risks.” The foundations for Artificial Intelligence Regulation in Brazil were defined as including “the encouragement of self-regulation, through the adoption of codes of conduct and guides to good practices, observing… good global practices.”[25]

Subsequently, Bill No 2.338/2023 proposed[26] a different definition of Artificial Intelligence, which reads[27] as follows:

“System of Artificial Intelligence: a computational system, with different degrees of autonomy, designed to infer how to achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, through input data coming from machines or humans, with the objective of producing predictions, recommendations, or decisions that could influence either the virtual or the real world.”

The other notable provisions of the draft Bill will now be considered. 

The Bill commences with Article 1, which states that the Bill establishes general norms, national in character, for the dissemination, implementation and responsible use of systems of Artificial Intelligence (AI) in Brazil, with the objective of protecting fundamental rights and of guaranteeing the implementation of secure and reliable systems that benefit humankind, democracy, and the development of science and technology.

Article 2 continues in a similar vein and sets down fundamentals for the development of such systems:

  • The importance of humankind;
  • Respect for human rights and the values of democracy;
  • The free development of personality;
  • Protection of the environment and sustainable development;
  • Equality, non-discrimination, plurality, and respect for the rights of workers;
  • The development of technology and innovation;
  • The defence of the consumer;
  • Privacy and the protection of data and informational self-determination;
  • Access to information and education, and a conscious awareness of the systems of artificial intelligence and their application.

Article 3 requires that the development, implementation and use of systems of Artificial Intelligence be carried out in good faith and in observance of principles including inclusive growth, sustainable development and well-being. Self-determination and freedom of decision and choice are also set down, as well as non-discrimination, justice, equality and inclusion, transparency, intelligibility, robustness of systems and security of information.

Article 4 provides definitions. It defines Artificial Intelligence as follows:

“a computational system, with different degrees of autonomy, designed to infer how to achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, through input data from machines or humans, with the aim of producing predictions, recommendations or decisions that may influence the virtual or real environment”

It distinguishes between a provider, on the one hand, and, an operator on the other:

“artificial intelligence system provider: a natural or legal person, of a public or private nature, who develops an artificial intelligence system, directly or on demand, with a view to placing it on the market or applying it in a service provided by it, under its own name or brand, for consideration or free of charge” 

“artificial intelligence system operator: a natural or legal person, of a public or private nature, who employs or uses, on his behalf or for his benefit, an artificial intelligence system, unless such system is used within the scope of a personal activity of a non-professional nature”. 

Article 5 gives rights to persons affected by systems of Artificial Intelligence, including the rights: to be provided with information in respect of the person’s interactions with the artificial intelligence system; to be provided with explanations about the decision, recommendation or forecast taken by the AI system; to contest decisions made by the AI system; to human participation in decisions of the AI system; to non-discrimination; and to privacy and the protection of personal data.

Article 6 provides that the rights detailed in the Bill may be exercised before a competent administrative body, as well as before the court, either individually, or collectively, in accordance with extant legislation on individual, collective and “diffuse remedies”. 

Article 7 provides a right to persons affected by a system of Artificial Intelligence to receive a summary of their interaction with the system, together with clear and adequate information in various respects, including: a description of the system; the type of decisions, recommendations and forecasts that the system makes; the consequences of the utilisation of the system for the person; and the categories of personal data utilised in the context of the functioning of the AI system.

Article 8 provides a right to a person affected by an AI system to solicit an explanation of the decision, recommendation or forecast, with information in respect of the criteria and procedures utilised. This should include the rationale and logic of the system; the significance and forecast consequences of the type of decision for the individual concerned; the degree or level of contribution of the AI system to the making of the decision; the data processed and the criteria for taking the decision; the mechanism by which the affected person may contest the decision; and the possibility of soliciting human intervention within the terms of the law.

Article 9 gives further expression to the affected person’s right to contest decisions. Articles 10 and 11 deal with human review and with significant human involvement in high-impact decisions, and are considered in turn below.

Article 10 deals with the right to human review. It states:

“When the decision, prediction or recommendation of an artificial intelligence system produces relevant legal effects or that significantly impact the interests of the person, including through the generation of profiles and the making of inferences, the latter may request human intervention or review. (…) Human intervention or review will not be required if its implementation is proven to be impossible, in which case the person responsible for the operation of the artificial intelligence system will implement effective alternative measures, in order to ensure the reanalysis of the contested decision, taking into account the arguments raised by the affected person, as well as the reparation of any damage generated.” 

Article 11 provides for significant human involvement in certain cases: “in scenarios in which decisions, predictions or recommendations generated by artificial intelligence systems have an irreversible impact or are difficult to reverse or involve decisions that may generate risks to the life or physical integrity of individuals, there will be significant human involvement in the decision-making process and final human determination.” 

Article 12 speaks of the right to receive fair treatment in respect of the implementation and use of the system of Artificial Intelligence. 

Chapter III of the Bill, Articles 13 to 18, deals with categorising risk. Bill No. 2.338/2023, clearly mirroring the EU position, presents a risk classification structure. Chapter III is entitled “Classification of Risk”, refers to a “preliminary evaluation”,[28] and states:

“Todo sistema de inteligência artificial passará por avaliação preliminar realizada pelo fornecedor para classificação de seu grau de risco.”

“Every system of Artificial Intelligence shall pass through a preliminary evaluation, carried out by the supplier, to establish the classification of its degree of risk.”

The legislation also affords the competent authority the opportunity to reclassify the risk.[29] Risk categorised as high is caught by the strictest provisions of the regulation.

Prior to its placement on the market or use in service, every artificial intelligence system must undergo a preliminary assessment carried out by the supplier to classify its degree of risk (Article 13).

Mirroring provisions in the European Union, that Article also states that:

  • Suppliers of general-purpose artificial intelligence systems shall include in their preliminary assessment the purposes or applications indicated, pursuant to article 17 of this law. 
  • There will be a record and documentation of the preliminary assessment carried out by the supplier for the purposes of liability and accountability in the event that the artificial intelligence system is not classified as high risk. 
  • The competent authority may determine the reclassification of the artificial intelligence system, subject to prior notification, as well as determine the carrying out of an algorithmic impact assessment for the investigation in progress. 
  • If the result of the reclassification identifies the artificial intelligence system as high risk, the performance of an algorithmic impact assessment and the adoption of the other governance measures provided for in Chapter IV shall be mandatory, without prejudice to any penalties in the event of a fraudulent, incomplete or untrue preliminary assessment. 
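The assessment-and-reclassification flow just described can be sketched in a few lines of code. This is purely an illustration of the procedure set out in Articles 13 et seq.: the class and function names are assumptions of the sketch, not terms drawn from the Bill.

```python
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    EXCESSIVE = "excessive"  # prohibited practices (Article 14)
    HIGH = "high"            # triggers Chapter IV governance measures
    OTHER = "other"          # preliminary assessment recorded only

@dataclass
class AISystem:
    name: str
    risk: Risk                               # supplier's self-classification
    impact_assessment_required: bool = False
    records: list = field(default_factory=list)

def preliminary_assessment(system: AISystem) -> None:
    # Article 13: the supplier records its own classification before the
    # system is placed on the market or used in service.
    system.records.append(f"preliminary assessment: {system.risk.value}")

def reclassify(system: AISystem, new_risk: Risk) -> None:
    # The competent authority may reclassify the system, subject to prior
    # notification; reclassification to high risk makes an algorithmic
    # impact assessment and the Chapter IV measures mandatory.
    system.records.append(f"reclassified: {system.risk.value} -> {new_risk.value}")
    system.risk = new_risk
    if new_risk is Risk.HIGH:
        system.impact_assessment_required = True

# A hypothetical credit-scoring system, initially self-assessed as low risk
# and later reclassified by the authority.
credit_scoring = AISystem("credit-scoring", Risk.OTHER)
preliminary_assessment(credit_scoring)
reclassify(credit_scoring, Risk.HIGH)
assert credit_scoring.impact_assessment_required
```

The record kept at each step mirrors the Bill’s documentation requirement: even a system that is not classified as high risk leaves a documented trail of its preliminary assessment.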

Again, Article 14 mirrors the position in the European Union in respect of the prohibition of certain Artificial Intelligence systems:

Art. 14. The implementation and use of artificial intelligence systems is prohibited: 

I – that employ subliminal techniques that have the objective or effect of inducing the natural person to behave in a way that is harmful or dangerous to his or her health or safety or against the foundations of this Law; 

II – that exploit any vulnerabilities of specific groups of natural persons, such as those associated with their age or physical or mental disability, in order to induce them to behave in a way that is harmful to their health or safety or against the foundations of this Law; 

III – by the public authorities, to evaluate, classify or rank natural persons, based on their social behaviour or personality attributes, by means of universal scoring, for access to goods and services and public policies, in an illegitimate or disproportionate manner. 

Article 15 addresses the use of Biometric identification systems and states that “within the scope of public security activities, the use of biometric identification systems at a distance, on a continuous basis in spaces accessible to the public, is only allowed when there is a provision in specific federal law and judicial authorization in connection with the individualized criminal prosecution activity” in respect of the following:

I – prosecution of crimes punishable by a maximum sentence of imprisonment of more than two years; 

II – search for victims of crimes or missing persons; or 

III – crime in flagrante delicto. 

Article 16 states that it will be the responsibility of the competent authority to regulate excessively risky artificial intelligence systems. 

High-risk systems are those designated for use for the following purposes (Article 17): 

I – application as safety devices in the management and operation of critical infrastructures, such as traffic control and water supply and electricity networks; 

II – vocational education and training, including systems for determining access to educational or vocational training institutions or for the assessment and monitoring of students; 

III – recruitment, screening, filtering, evaluation of candidates, decision-making on promotions or termination of contractual employment relationships, division of tasks and control and evaluation of the performance and behaviour of people affected by such applications of artificial intelligence in the areas of employment, worker management and access to self-employment; 

IV – evaluation of criteria for access, eligibility, concession, review, reduction or revocation of private and public services that are considered essential, including systems used to assess the eligibility of natural persons for the provision of public assistance and security services; 

V – assessment of the indebtedness capacity of individuals or establishment of their credit rating; 

VI – dispatch or prioritization of emergency response services, including firefighters and medical assistance; 

VII – administration of justice, including systems that assist judicial authorities in the investigation of facts and in the application of the law; 

VIII – autonomous vehicles, when their use may generate risks to the physical integrity of people; 

IX – applications in the health area, including those intended to assist in medical diagnoses and procedures; 

X – biometric identification systems; 

XI – criminal investigation and public security, in particular for individual risk assessments by competent authorities in order to determine the risk of a person committing offences or reoffending, or the risk to potential victims of criminal offences or to assess the personality traits and characteristics or past criminal behaviour of natural persons or groups; 

XII – analytical study of crimes related to natural persons, allowing law enforcement authorities to search large sets of complex data, related or unrelated, available in different data sources or in different data formats, in order to identify unknown patterns or discover hidden relationships in the data; 

XIII – investigation by administrative authorities to assess the credibility of evidence in the course of the investigation or prosecution of offences, to predict the occurrence or recurrence of an actual or potential offence on the basis of the profiling of natural persons; or 

XIV – migration management and border control. 

Article 18 provides that the competent authority can update the list of excessive or high-risk artificial intelligence systems.

Chapter IV of the Bill, Articles 19 to 26, sets down issues around the governance of systems of Artificial Intelligence. 

Article 19 covers the precepts of the governance structures.  Article 20 provides that operators or providers (“agents”) of high-risk systems shall adopt specific governance measures and internal processes: 

I – documentation, in the format appropriate to the development process and the technology used, regarding the operation of the system and the decisions involved in its construction, implementation and use, considering all relevant stages in the life cycle of the system, such as the design, development, evaluation, operation and discontinuation stages of the system; 

II – use of tools for automatic recording of the system’s operation, in order to allow the evaluation of its accuracy and robustness and to ascertain discriminatory potentials, and implementation of the risk mitigation measures adopted, with special attention to adverse effects; 

III – conducting tests to evaluate appropriate levels of reliability, according to the sector and the type of application of the artificial intelligence system, including robustness, accuracy, precision and coverage tests; 

IV – data management measures to mitigate and prevent discriminatory biases, including: 

a) evaluation of the data with appropriate measures to control human cognitive biases that may affect the collection and organization of data and to avoid the generation of biases due to problems in classification, failures or lack of information in relation to affected groups, lack of coverage or distortions in representativeness, according to the intended application, as well as corrective measures to avoid the incorporation of structural social biases that may be perpetuated and amplified by the technology; and 

b) composition of an inclusive team responsible for the design and development of the system, guided by the search for diversity. 

V – adoption of technical measures to enable the explainability of the results of artificial intelligence systems and measures to provide operators and potential impacted parties with general information on the operation of the artificial intelligence model employed, explaining the logic and criteria relevant to the production of results, as well as, upon request of the interested party, providing adequate information that allows the interpretation of the results concretely produced, respecting industrial and commercial secrecy. 

Public authorities, when hiring, developing or using artificial intelligence systems considered to be of high risk, are required to adhere to specific measures (Article 21). 

Article 22 provides for an “algorithmic impact assessment of artificial intelligence systems”, which is required of artificial intelligence agents in respect of high-risk Artificial Intelligence systems. 

Article 23 states that such an assessment will be carried out by a professional or professionals with the technical, scientific and legal knowledge necessary to produce the report, and with functional independence. An evaluation methodology is set out in Article 24. 

Article 25 states that the algorithmic impact assessment “will consist of a continuous iterative process, carried out throughout the entire life cycle of high-risk artificial intelligence systems, requiring periodic updates.”

Article 26 provides for the conclusions of the impact assessment to be made public after commercially sensitive material has been protected.  

Chapter V refers to issues of civil liability. Article 27[30] states that where a person is harmed through involvement with a system of Artificial Intelligence of high risk, or excessive risk, the supplier or operator must answer for the damages that arise in proportion to its level of participation in the damage. Where, however, the individual interacts with a system of Artificial Intelligence which is not high-risk, the culpability of the agent that caused the damage is presumed, reversing the onus of proof in favour of the victim. 

Chapter VI sets out good practice and governance codes. Chapter VII covers the area of communication of serious incidents. 

Article 31 states that:

“Artificial intelligence agents shall report to the competent authority the occurrence of serious security incidents, including when there is a risk to the life and physical integrity of persons, the interruption of the operation of critical infrastructure operations, serious damage to property or the environment, as well as serious violations of fundamental rights, in accordance with the Regulation.”

Chapter VIII deals with the supervision and enforcement of Artificial Intelligence. Section I deals with the competent authority; Section II deals with administrative sanctions.

The competent authority is provided for in Article 32 which states that “the Executive Branch shall designate a competent authority to ensure the implementation and supervision of this Law.” 

Articles 33, 34 and 35 contain further provisions in respect of the functions of the competent authority. Article 36 contains provisions on sanctions, stating that the competent authority can issue a warning or impose a simple fine, limited to R$ 50,000,000.00 (fifty million reais) or up to 2% (two per cent) of the revenue of the offender’s group or conglomerate in Brazil in its last fiscal year, excluding taxes. 
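The fine ceiling in Article 36 can be illustrated with a few lines of arithmetic. The reading sketched below, under which the 2% revenue figure operates subject to the R$ 50 million cap (mirroring the structure of fines under the LGPD), is an assumption of this illustration rather than a settled interpretation, and the function name is hypothetical.

```python
def max_simple_fine_brl(revenue_brl: float) -> float:
    # Illustrative reading of Article 36: 2% of the group's Brazilian
    # revenue in the last fiscal year (excluding taxes), capped at
    # R$ 50 million. Whether the cap binds the 2% figure is an assumption.
    CAP_BRL = 50_000_000.00
    RATE = 0.02
    return min(RATE * revenue_brl, CAP_BRL)

# 2% of R$ 1 billion is R$ 20 million, below the cap.
assert max_simple_fine_brl(1_000_000_000) == 20_000_000.0
# 2% of R$ 10 billion would be R$ 200 million, so the cap applies.
assert max_simple_fine_brl(10_000_000_000) == 50_000_000.0
```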

Similar to the position in the EU, Articles 38, 39 and 40 provide for the use of regulatory sandboxes. Article 40 states that the competent authority “shall issue regulations to establish the procedures for requesting and authorizing the operation of regulatory sandboxes, and may limit or interrupt their operation, as well as issue recommendations, taking into account, among other aspects, the preservation of fundamental rights, the rights of potentially affected consumers and the security and protection of personal data that are subject to processing.” 

Article 41 states that participants in the AI regulatory sandbox “remain liable in accordance with applicable liability law for any harm inflicted on third parties as a result of the experimentation taking place in the sandbox.” 

Article 42 provides for an exception to copyright infringement stating:

The automated use of works, such as extraction, reproduction, storage and transformation, in data and text mining processes in artificial intelligence systems, in activities carried out by research and journalism organizations and institutions, and by museums, archives and libraries, does not constitute an infringement of copyright, provided that: 

I – does not have as its objective the simple reproduction, exhibition or dissemination of the original work itself; 

II – the use occurs to the extent necessary for the purpose to be achieved; 

III – does not unjustifiably harm the economic interests of the holders; and 

IV – does not compete with the normal exploitation of the works. 

Article 43 provides for a publicly accessible database for high-risk systems containing the public documents of the impact assessments with commercially sensitive information removed.

Chapter IX sets out final provisions on aspects of law such as non-exclusion of other provisions set down in law (Article 44).

Comment

The proposed Brazilian enactment, as passed by the Senate, retains its risk-classification system, and the way in which it classifies risk is comparable to the equivalent position in the European Union AI Act.[31] A comparison with the text of the EU AI law[32] shows that both versions adopt a system of risk classification and, interestingly, both adopt a risk management system.[33] There are also similarities in how fines are levied and in the use of regulatory sandboxes. The risk classification system was not present in the antecedent Brazilian bills on the subject and so, cautiously, we can cite this as an example of Bradford’s Brussels Effect[34]: the influence of European Union regulatory positions on equivalent positions in jurisdictions around the world in certain regulatory areas. Bradford gives as examples market competition, the digital economy, consumer health and safety, and the environment. To this list, as already mentioned in Chapter 5, we can cautiously add Artificial Intelligence. Other jurisdictions may follow the EU approach, though it is worth noting that the comparable provisions in the United States of America[35] and China[36] envisage setting global standards too.  

Under the Brazilian legislation, every AI system must implement a governance structure which includes transparency and security measures, as set out in Chapter IV of the Bill. High-risk AI systems must also include: (i) technical documentation describing several characteristics of the system; (ii) log registers; (iii) reliability tests; (iv) measures to mitigate discriminatory biases; and (v) technical explainability measures.[37]


The European Union position is set out in Article 9 of the original law as proposed by the Commission, which states:

“A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.”

While the Brazilian legislature is looking at adopting a risk classification which refers to excessive risk or high risk, with every AI system subject to a preliminary evaluation, the European Union position uses, effectively, three distinct risk classifications: unacceptable risk, high risk and limited risk.[38]

Overall the proposed Bill in Brazil is forward-looking and concerned with the rights of individuals, including mention of the rights of workers. Interestingly, it deals with the issue of civil liability and distinguishes between liability for high-risk systems and liability for systems whose decisions are not high risk. In respect of the latter, the burden of proof is reversed in favour of the victim. In respect of the former, the supplier, or provider, of the Artificial Intelligence system is strictly liable in damages to the extent of its involvement.

The aspects set down in Article 2 are also laudable. They put forward a positive, progressive position on Artificial Intelligence in Brazil which both embraces the technology and sets its parameters, referring to concepts such as the free development of personality, consumer protection, access to information, protection of the environment, equality, non-discrimination, plurality and respect for the rights of workers.  

The Bill is also notable in respect of its redress mechanisms. Numerous provisions refer to the right of the affected person to redress: to solicit an explanation or to contest a decision. There are transparency obligations too. 

Overall the Brazilian approach presents an impressive array of provisions across the sweep of the technology which are fundamentally based in the rights of the individual. The legislature has clearly gone to great lengths to define issues it anticipates will be important and to set down clear governing provisions for a variety of scenarios. It also gives a clear indication that while Artificial Intelligence systems are acceptable there will be times when human intervention is necessary.

Of course, interpretation from the courts on various aspects can still be anticipated, and this is appropriate as market conditions could change in unanticipated ways. The interpretation of Article 27 on civil liability is a case in point, given the clear distinction drawn between high-risk systems, where damages proportional to involvement arise, and systems which do not involve a high risk, where a reversed burden of proof in favour of the victim applies. The risk classification system set down in Chapter III will likely link to the application of Article 27. Final Congressional approval and enactment of the Brazilian draft law might take a few more years.[39]



[1] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[2] See Bradford, The Brussels Effect, 2020.

[3] Examples given in The Brussels Effect include: market competition, the digital economy, and the environment. To this list can now, evidently, be added the field of Artificial Intelligence. 

[4] https://www.statista.com/topics/6949/social-media-usage-in-brazil/#editorsPicks

[5] https://www.ft.com/content/fda15a48-b6ab-44fe-9bc0-1127feedaa80

[6] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[7] https://oecd.ai/en/dashboards/policy-initiatives/http:%2F%2Faipo.oecd.org%2F2021-data-policyInitiatives-27104

[8] Estratégia Brasileira de Inteligência Artificial

[9] https://artificialintelligenceact.eu/developments/

[10] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[11] https://oecd.ai/en/dashboards/policy-initiatives/http:%2F%2Faipo.oecd.org%2F2021-data-policyInitiatives-27104

[12] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[13] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[14] https://oecd.ai/en/dashboards/policy-initiatives/http:%2F%2Faipo.oecd.org%2F2021-data-policyInitiatives-27344

[15] Some of the fields of knowledge promoted by the programme are data science, cybersecurity, the Internet of Things, cloud computing and robotics. https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[16] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[17] Bill No 5.051/2019; Bill No 872/2021; and Bill No 21/2020

[18] https://www25.senado.leg.br/web/atividade/materias/-/materia/157233

[19] https://legis.senado.leg.br/comissoes/comissao?codcol=2629

[20] https://www12.senado.leg.br/noticias/materias/2024/12/10/senado-aprova-regulamentacao-da-inteligencia-artificial-texto-vai-a-camara

[21] “The version approved this Tuesday kept social media algorithms off the list of systems considered high risk, a decision that met requests from the opposition senators Marcos Rogério (PL-RO), Izalci Lucas (PL-DF) and Mecias de Jesus (Republicanos-RR) and prompted lament from some government-aligned parliamentarians.” (Translation.) Source: Agência Senado (https://www12.senado.leg.br/noticias/materias/2024/12/10/senado-aprova-regulamentacao-da-inteligencia-artificial-texto-vai-a-camara)

[22] https://www12.senado.leg.br/noticias/materias/2024/12/10/senado-aprova-regulamentacao-da-inteligencia-artificial-texto-vai-a-camara

[23] https://www.derechosdigitales.org/wp-content/uploads/Brazil-Bill-Law-of-No-21-of-2020-EN.pdf

[24] Art 2. 

[25] Art 4.

[26] https://legis.senado.leg.br/sdleg-getter/documento?dm=9347593&ts=1698248944489&disposition=inline&_gl=1*1oqxom7*_ga*MTMxOTQ1Njg5NC4xNjk4NzU3MjQ1*_ga_CW3ZH25XMK*MTY5ODc1NzI0NC4xLjEuMTY5ODc1NzMwMy4wLjAuMA..

[27] “I – artificial intelligence system: a computational system, with varying degrees of autonomy, designed to infer how to achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, by means of input data from machines or humans, with the aim of producing predictions, recommendations or decisions that may influence the virtual or real environment;” (Translation.)

[28] Section 1 of Chapter III.

[29] Article 13. Competent Authority defined in Article 4. 

[30] “§ 1. In the case of a high-risk or excessive-risk artificial intelligence system, the supplier or operator is strictly liable for the damage caused, to the extent of its participation in the damage. 

§ 2. Where the artificial intelligence system is not high risk, the fault of the agent causing the damage shall be presumed, with the burden of proof reversed in favour of the victim.” (Translation.) 

[31] See Tito Rendas and Ivar Hartmann, ‘From Brussels to Brasília: How the EU AI Act Could Inspire Brazil’s Generative AI Copyright Policy’, GRUR International, 2024, ikae027, https://doi.org/10.1093/grurint/ikae027

[32] https://artificialintelligenceact.eu/the-act/

[33] See Article 9.

[34] Bradford, The Brussels Effect, (OUP) 2020. 

[35] See https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[36] See https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm art 6.

[37] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[38] See 5.2.2 of Explanatory Memorandum to the initial European Commission proposal for an AI Act which refers to Unacceptable, High Risk, and no risk. https://artificialintelligenceact.eu/wp-content/uploads/2022/05/AIA-COM-Proposal-21-April-21.pdf The European Parliament subsequently adopted “Limited Risk” https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[39] Tito Rendas and Ivar Hartmann, ‘From Brussels to Brasília: How the EU AI Act Could Inspire Brazil’s Generative AI Copyright Policy’, GRUR International, 2024, ikae027, https://doi.org/10.1093/grurint/ikae027