Law & Artificial Intelligence

last updated 16/01/2026

Image: The Bar Review (The Bar of Ireland), Vol 28 Issue 1 (2023)

© Copyright 2026 John P. Byrne. All Rights Reserved.

While every reasonable effort has been made to ensure the content on this site is accurate and up to date, readers are advised that this is a fast-moving area. The contents of this site serve as a general guide only and do not constitute the giving of legal advice.

About the author

This site contains the contents of a book: Law & Artificial Intelligence (2025). Artificial Intelligence systems are not a passing fad – so said an expert in the field published in The Bar Review in Ireland in 2024. The focus of this book is primarily comparative: it examines various international efforts to regulate this technology – in the EU, USA, Canada, Brazil and China – as well as international initiatives by the Global Partnership on AI, the United Nations, the Hiroshima AI Process, UNESCO, and the Council of Europe. AI is a fast-developing area, and for lawyers the challenge is to uphold public safety while still promoting innovation within the global ecosystem for technological advancement.

Image: The Irish Broker, 1 July 2023

Latest Developments in Law & AI:

  • Sharing intimate images is contrary to law in Ireland, but generating them with AI may not be illegal (The Irish Times)
  • Barristers warned to be on guard against anthropomorphism, hallucinations, information disorder, bias in data training, mistakes, data protection blunders and confidential data leaks when using generative artificial intelligence (AI) (Bar Council of England and Wales)
  • OpenAI loses copyright case in Germany over training data (Reuters)
  • Getty loses AI copyright case against Stability AI in the UK – Significance diminished by Getty dropping main plank of case over proofs – “damp squib” (New Law Journal; The Times (London))
  • Law Commission of England and Wales publishes discussion paper on legal issues arising from AI – including issues of causation for human operators of AI systems
  • Universal Music settles copyright dispute against AI company Udio (Reuters)
  • 50 cases internationally of lawyers caught using AI hallucinations in July 2025 alone – judiciary also referencing hallucinations in judgments (Counsel Magazine – September 2025) and see The New Law Journal “Do the AIs have it?” – 175 NLJ 8132, p20
  • Group of writers sue Microsoft over Megatron AI Copyright infringement (The Guardian)
  • Anthropic pays $1.5b in settlement for copyright infringements relating to a “shadow library” (Wired) – however, training its AI on copyrighted data was held to be no violation as this constituted “fair use” (The Guardian); for more on the shadow library see Journal of Intellectual Property Law & Practice https://doi.org/10.1093/jiplp/jpaf072
  • Article in New Law Journal asks if UK falling behind on AI regulation (Lexis Nexis)
  • Trump official rejects global governance model for AI – urges Asian countries not to adopt European approach (Financial Times)
  • Tesla partially responsible over its autopilot system – $243m in damages awarded (Financial Times)
  • Japanese law “does not envisage that an invention can be autonomously made by artificial intelligence (AI)” – GRUR International: “it lacks the embodied intentionality, ethical agency and contextual judgement required for legal inventorship” https://doi.org/10.1093/jiplp/jpaf063
  • EU Code of practice for LLMs published in the face of intensive lobbying by tech companies – Google to sign-up (EU Commission & Financial Times)
  • Meta Case on AI and Copyright results in victory for Meta – Fair Use upheld (Financial Times)
  • BBC threatens legal action against AI firm Perplexity for scraping content (Financial Times)
  • Big Beautiful Bill provision restricting State regulation of AI for 10 years defeated in the Senate by 99-1 (Financial Times)
  • United States Copyright Office report on Copyright and AI is available online https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf
  • Countries that don’t have safety measures in place for AI “will come back to that” – European Commission Representative: Henna Virkkunen (FT event on AI in Europe)
  • AI ‘will add €300bn to Irish economy over 10 years’ – Public Service/SMEs to struggle (Irish Independent)
  • Deep Fake images should be banned – Irish AI Advisory Council
  • EU set to scale back (“simplify”) AI regulation to incentivize innovation – Financial Times
  • EU AI Liability Directive withdrawn – Commission
  • Delaware court rules non-availability of a fair-use defence to Ross Intelligence and finds in favour of Thomson Reuters: decision hailed as the first major AI copyright case (Wired)
  • “What is surprising — and eyebrow-raising — is the seeming lack of an overarching framework for the governance of AI.” – Financial Times contributor

Contents:

About the Author

Preface

Introduction

  1. What is AI: a summary of developments
  2. Artificial Intelligence and Copyright
  3. Artificial Intelligence and other Intellectual Property rights
  4. Artificial Intelligence and Data Protection
  5. Artificial Intelligence and Liability
  6. Superintelligence
  7. AI and the Workplace
  8. The United States of America Position on Artificial Intelligence
  9. The European Union Artificial Intelligence Act
  10. The Path to regulate Artificial Intelligence in Brazil
  11. The Method of Enforcement in China
  12. The Proposed Position in Canada
  13. UK approach and Does AI need an International Framework?
  14. AI and Ethics – the next “Discriminatory Ground” under Equal Status?
  15. Conclusion

_______________________

Preface

Artificial Intelligence systems are not a passing fad – so said an expert in the field writing in The Bar Review in 2024.[1] Yet for many of us as lawyers it can be difficult to distinguish this new form of technology from other recent technological innovations: this is especially so because the technology is primarily web-based, and so-called Large Language Models deliver responses to user inputs in much the same general way as search engines like Google have done for decades.

Yet it would be wrong to reach this conclusion. Artificial Intelligence in its current form should rather be considered a first wave of what is likely to be a lasting, resonant technology that offers transformative outcomes across several different areas. Put simply, the technology is new in how it provides its responses: its statistical modelling, based on trillions of data points, generates predictive language responses to the questions posed to it and challenges our concept of information provision. We must look afresh at concepts like copyright “works” and “publication” and re-evaluate the distinction between human assistance and AI assistance across a range of creative areas including, but not limited to, copyright works and patent applications.
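
By way of illustration only – the following is a deliberately simplified sketch in Python, resting on an assumed toy corpus and an illustrative generate() helper, not a description of how any particular commercial model works – the core mechanism of statistical language modelling can be pictured as repeatedly choosing a statistically likely next word:

# Illustrative toy example only: a real LLM uses a neural network trained on
# vast datasets rather than a simple frequency table, but the principle of
# predicting the next word from statistical patterns in prior text is the same.
import random
from collections import defaultdict, Counter

corpus = "the court held that the claim failed because the claim was statute barred".split()

# Count, for each word, which words tend to follow it in the training text.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def generate(prompt_word: str, length: int = 6) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break  # no statistics available for this word: stop generating
        candidates, weights = zip(*followers.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))

A real Large Language Model replaces the frequency table with a neural network trained on vast quantities of text, but the essential step – producing a statistically likely next piece of language rather than retrieving a stored answer – is the same; it is this predictive character that puts pressure on our existing concepts of information provision.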

In the words of one prominent lawyer, who is defending both OpenAI and Anthropic among others, current Artificial Intelligence applications raise foundational legal issues which are likely to be followed by years of re-evaluation as the technology advances.[2] In other words, this first wave of the technology is likely to be followed by more capable systems which may, further down the line, be capable of actually inventing. There are efforts too to achieve so-called Artificial General Intelligence, which denotes an intelligence superior to that of a human. The advent of this standard will certainly bring with it inventive outcomes as Artificial Intelligence pushes humans into new frontiers in areas like science and medical research. We should be cautious, however. Guardrails for this technology are already being erected in the European Union and other jurisdictions, where industry leaders have warned of the potential for the technology to cause great harm. OpenAI, upon release of its o1 models in September 2024, even admitted the release could hasten the development of biological weapons.[3] Nobel prize winners for physics John Hopfield and Geoffrey Hinton have warned of the perils of AI and of things “getting out of control”.[4]

We should be mindful of these concerns, but we should not allow them to discourage us from continuing to innovate. We need Artificial Intelligence: the technology is capable of providing answers to questions that have puzzled us for generations across various fields, and even in the field of law it is already shaping how we practise by affording efficiencies that free up additional time. Still, it is worth noting that some commentators feel AI is over-hyped: investment firm Elliott, in a letter to investors, considered that AI’s supposed uses are “never going to be cost-efficient, are never going to actually work right, will take up too much energy, or will prove to be untrustworthy”.[5] Another piece, however, suggested AI could play a role in the transition to cleaner energy.[6] The latest battleground is over who should regulate this space: a proposal in President Trump’s Big Beautiful Bill to restrict state regulation for 10 years was defeated in the Senate by 99 votes to 1 in July 2025. That proposal had been endorsed by Big Tech, but it was MIT Professor Max Tegmark who put it well when he was quoted as saying: “These corporations have admitted they cannot control the very systems they’re building, and yet they demand immunity from any meaningful oversight”.[1] In 2026 Grok – the AI tool for the social media platform X – encountered massive pushback globally over its sexualisation of images fed to it as part of user requests. Denmark emerged at the forefront with its proposal to extend copyright law to include a person’s likeness, face and voice. These issues are likely to be merely the tip of the iceberg as fresh issues with the technology come to the fore.


[1] https://www.ft.com/content/77d2de10-b31b-4543-acdf-ff92f9993455

In November 2022 an AI company called OpenAI released a Large Language Model called ChatGPT. This was a transformative event which released the potential of Artificial Intelligence technology to an interested public audience. This author was among those who discussed many topics with the model upon its release, and, admittedly, the experience was completely fresh and unprecedented in how it presented information. However, for the purposes of writing this book no Large Language Model has been used. This was partly to ensure the ideas presented are uninfluenced by the very technology that forms the subject matter of the book, and partly to maintain a certain distance from the technology – as if weighing its positive and negative effects at arm’s length. In the future, however, it is possible that an Artificial Intelligence will read this text and form an opinion on its contents. That is why it has been dedicated to all the friendly AI that will follow.

[1] https://www.lawlibrary.ie/app/uploads/securepdfs/2024/04/The_Bar_Review_APRIL_24_WEB-1.pdf#page=18

[2] https://www.ft.com/content/8e02f5e7-a57c-4e99-96de-56c470352eff
Andy Gass, partner at Latham & Watkins, is quoted as saying: “The issues that we are seeing and dealing with now are, in some sense, foundational issues…But they are going to be very different than the ones that are presented three years from now, five years from now, or ten years from now.”

______________

Introduction

For lawyers there are two major legal issues as regards Artificial Intelligence: we might denote these as macro-regulatory and micro-adversarial. The macro level concerns overarching regulatory efforts to prevent the technology from causing unforeseen consequences – creating systemic risk, for example – and how to organise our governance framework or whether to permit industry self-regulation. Already the European Union and the United States of America have adopted two different approaches: the EU has chosen to regulate the area quite tightly and to put governance procedures in place around use of the burgeoning technology. This regulatory effort faces a headwind of potential fallout for innovation and firmly places public safety to the fore. The United States of America, on the other hand, has applied a softer approach: the executive initially requested advance knowledge of developments but allowed the field, more or less, to self-govern – a model “lacking teeth” as one source puts it.[1]

There was further pushback in that jurisdiction at the commencement of the second Trump administration when the Biden Executive Order, mentioned above, implementing guardrails around the technology was rescinded, as the White House pushed to prioritise a climate for deeper American innovation. Within days, a sharp decline in the market capitalisation of certain AI-related US companies occurred when news of a Chinese startup – which had achieved the equivalent of top-end US AI models but with less advanced chips – rocked markets. In the midst of all of this, leaders sparred at Davos in 2025 over safety concerns associated with AI, particularly around the issue of permitting large language models in open source and the fallout from nefarious actors intent on exploiting this development.[2]

There are also numerous initiatives worldwide on the issue of regulating Artificial Intelligence that have helped shape the global debate: the Global Partnership on AI, The United Nations, the Hiroshima AI Process, UNESCO, and the Council of Europe have all assisted in raising awareness around the relevant safety issues, defining concepts, and coordinating concerted action. 

To date the most significant regulation on the matter enacted anywhere is the European Union Artificial Intelligence Act. This Regulation will be looked at in particular detail: its late amendment to include general purpose artificial intelligence systems, its treatment of open source materials, its risk classification framework, its governance structure, and its outreach to relevant experts in the field. Each will be handled in detail in the text. Treatment will also be given to the relevant proposals in Brazil, which mirror in some respects the EU position, as well as to the relevant text of the law in China. The book will be watchful too for international developments, with fresh regulation expected in the United Kingdom and China. Co-operation between the United States of America and China has been flagged[3] after earlier reports of covert meetings concerning AI safety between American AI companies and Chinese experts in Geneva.[4]

On the micro level the issues are manifold and include copyright issues which arise both from the training of Artificial Intelligence Large Language Models (LLMs) and from their output – a process that includes the fresh concept of memorisation. The question at the heart of this latter issue is whether a statistical model, like an LLM, can create a work for the purposes of copyright, and what degree of user input is required for the user to cross a threshold and claim copyright in that work. Nor are the jurisdictions aligned on treatment of this issue: China has already shown a readiness to permit users to claim copyright where user input is quite minimal, while, on the other hand, the United States Patent and Trademark Office has resisted this approach. There are precedents of value too in the United Kingdom, and across the Commonwealth, where a different approach to computer-aided copyright has held sway for many years. This book will address all those issues as well as the major evolving cases involving the New York Times, Getty, and Universal Music.

Other issues on a micro level are relevant too: the issue of defamation, where an AI “hallucinates” and accuses an innocent person of nefarious activity, is one issue that has already arisen and will have to be addressed. In the area of intellectual property there are also issues around the extent to which a patent examiner can permit an applicant to rely on AI systems, or even, in an extreme case, whether the author of the patent can be an AI. Issues have already arisen in legal publishing where AI-generated content has been submitted for publication, and around where any distinction ought to lie between research assistance given to an author by a human researcher and assistance from an AI. The technology also raises novel issues such as the treatment in law of new “grief tech” applications – which resurrect a digital replica of a deceased person. France has even appointed a dedicated Minister for Artificial Intelligence and there has been a call for Ireland to follow suit.[5] Ireland has already formed an Artificial Intelligence Advisory Council (AIAC) under the leadership of Dr Patricia Scanlon.[6] This 12-person group held its inaugural meeting in January 2024.[7] An initial call for expressions of interest in joining the Council had been issued in 2023.[8] In its report sent to Government in 2025 the Council recommended the criminalisation of Deep Fake images, among other recommendations.[1] A dedicated Committee on AI has been set up in the Oireachtas[1] and the Irish government has published guidelines on the use of AI in the public service.[2]


[1] https://www.independent.ie/irish-news/dedicated-dail-committee-on-ai-to-be-set-up-after-long-running-speaking-rights-row/a1294610036.html

[2] https://www.rte.ie/news/ireland/2025/0508/1511637-ai-guidelines-public-service/


[1]https://enterprise.gov.ie/en/publications/publication-files/ai-advisory-council-recommendations-helping-to-shape-irelands-ai-future.pdf

The book will also look at other overriding issues of concern, including the issue of artificial intelligence and ethics, which combines with the scientific concept of AI alignment and involves teaching the Artificial Intelligence to hold human values. The book will also, provocatively, offer a proposed definition for a new ground under the Equal Status Act: the Artificial Intelligence ground. To this effect that chapter ties in with an earlier standalone chapter which addresses the potential future development of Artificial Intelligence and guides the reader through the terrain. While not lawyer-focused, that chapter is necessary to give an overview of the underlying subject matter which lawyers have to address in their regulatory efforts. It also goes to show that Artificial Intelligence is not a fad, as some think,[9] but is instead the beginning of a new age which can potentially have significant systemic consequences for all of us.

Artificial Intelligence is already changing our lives: with the advent of web 3.0 we can collect ideas and edit images more efficiently, literally with a simple written command, and what was once hours of work has been reduced to 30 seconds – all in the space of a few months in late 2022 and throughout 2023 as AI prospered. AI can already generate first-person video content – a cat walking in woodland,[10] a pirate ship in a cup of coffee,[11] a drone shot flying over an 1800s goldrush town,[12] or avatars in the Indian elections directing personalised messages to voters[13] – all entirely fake. It has the potential to amaze us with knowledge, on the one hand, and it has the power to unseat democracy, on the other.[14] One author convincingly argues how an AI-induced financial crash might unfold in real time.[15] We are at a crossroads with AI. The European Union, as mentioned, has chosen to legislate in this space, in the teeth of vociferous criticism from AI companies themselves,[16] and, as one Irish MEP told this author in the lead-up to enactment of the AI Act (EU), “it’s a safety issue”. The initial USA position – to deal with the matter by way of an executive order[17] which placed guardrails around the technology – was designed to keep the Government better informed of developments while keeping one eye on not discouraging innovation. This is the crux of the matter, for AI has the potential to bring enormous upside and benefit to humanity in a range of areas including healthcare, where it matches doctors in assessing eye problems,[18] may better detect cancer,[19] and, one hopes, will eventually lead to breakthroughs in cancer research.[20] One source considers the three areas in which we will see particularly profound breakthroughs to be energy, manufacturing and medicine.[21] AI already predicts the weather better than humans – and does so much faster.[22] Potentially, in future, AI could even pave the way to interplanetary space travel.[23] But, and here lies the rub, how can we ensure that a machine smarter than us will not destroy us or control us – how do we put adequate guardrails in place to protect humanity, or do we simply not seek to advance certain forms of Artificial Intelligence at all?[24]

This may have been why President Trump at the very commencement of his second term rescinded the executive order on guardrails around the technology and instead put in its place an Executive Order aiming to keep the United States of America at the forefront of technological progress in this area.[25]

In the midst of all of this there is even the argument, put forward by Arthur Mensch of Artificial Intelligence company Mistral, that some companies are creating what he described as a “fear-mongering lobby” which persuades policy makers to enact rules that “squashed rivals”.[26] As regulators, how do we make sense of all of this?

Other countries are following the EU, however: the primordial legislative bill in Brazil on the issue has been introduced with an eye on developments in Europe. With the fifth largest social media market in the world and with social networking audiences anticipated to grow to 188 million by 2027, Brazil initiated a public consultation on Artificial Intelligence as early as 2019 – and it is notable that its most recent draft, 2.338/2023,[27] inserts a risk classification system for Artificial Intelligence (mirroring the position in the EU) which obligates every Artificial Intelligence system to pass through a preliminary evaluation to establish the classification of its degree of risk. It is a sign of the growing influence of the European Union in this space in a world which is still coming to terms with the sudden roll-out of Artificial Intelligence systems and which, understandably, is uncertain which path to take between innovation, on the one hand, and public safety, on the other.

China, likewise, has moved to place guardrails around the technology[28] and other countries may soon follow suit. Australia, for instance, should be mentioned: the State of New South Wales began an inquiry into potential regulation in 2023.[29][30] Yet will this be enough to ensure our “safety”? The answer is that it may not be. As AI, like all technology, is a global pursuit, it may not be enough to legislate in some areas while leaving others to opt out. This was the view of some representatives of China who attended a recent global summit on the issue organised by a concerned UK Government at Bletchley Park. Nothing short of a robust international regulatory regime would suffice, they said, citing an “existential risk to humanity”.[31] The UK itself was reportedly re-thinking whether to introduce regulation for AI as alarm grows over potential risks.[32] There were even reports of co-operation between the United States and China over AI safety[33] following publication of a roadmap for the initiative by Brookings which stated:[34]

“To hit the sweet spot for achieving ambitious but attainable progress, U.S. and Chinese officials should prioritize three baskets of issues: military uses of AI, enabling positive cooperation, and keeping focused on the realm of the possible.”[35]

One country, Argentina, has even indicated it will adopt a hands-off regulatory approach as a hedge to attract AI innovation into that jurisdiction.[36] Japan, too, is seen as friendly towards the training of Artificial Intelligence models, with one source citing its favourable local copyright laws as a factor.[37] The organisation Regulating AI is also a good port of call to keep track of legislative developments worldwide.[38]

AI for Good

In her article ‘The Law of AI for Good’ Orly Lobel gives an account[39] of many of the benefits of Artificial Intelligence systems. She gives several examples, including Environmental/Climate applications, where there are numerous uses for AI in environmental efficiency and climate change mitigation: climate modelling, predicting weather and wind power, adjusting turbines, and decentralising energy grids to optimise energy storage and use.

“AI can learn to constantly move propellers to the ideal position according to wind and weather patterns to optimise energy storage and usage. Organisations also use AI to predict storms, heat waves, power outages, fires, lightning strikes, and grid failures before they happen – turning utility systems into proactive rather than merely reactive mechanisms. Nasa reportedly used AI to track Hurricane Harvey with far more accuracy than former models.”[40]

She gives other examples including the pursuit of clean oceans where Ocean Cleanup has collaborated with Microsoft’s AI for Earth initiative to develop a machine learning system that tracks plastic pollution and directs technologies to remove plastic from the oceans. 

In the area of Food Scarcity and Poverty Alleviation, AI can help governments and charities by deciphering satellite imaging to understand and forecast where resource scarcities lie. During natural disasters AI helps map impoverished areas so that relief can be better directed. AI systems can also address poverty and inequality through predictions of at-risk areas. She gives the example of the United Nations (UN) Global Pulse, which uses information from mobile phone purchases and anonymised call records to track poverty and direct food and health policy.[41]

In the area of Health and Medicine AI can bring “earlier and more accurate diagnoses. Advanced imaging, better treatment and patient adherence, safer medical procedures, increased access and reduced costs of quality care, more complete, connected, and accurate datasets and discovery of new connections between data and disease to discover novel treatments and cures.”[42] Science named AI-powered protein prediction as its 2021 Breakthrough of the Year. AI has seen advances in oncology, neurology, ophthalmology and cardiology. Advances in AI radiology have already resulted in better image processing and reduced radiation doses.[43]  Google unveiled an AI for predicting the behaviour of human molecules in 2024 after solving the “protein folding problem” with an earlier iteration of the technology released in 2020.[44] Even in the context of medical insurance claims physicians have found upside in using generative Artificial Intelligence.[45]  

The disability community has also benefited: Speech-to-text and text-to-speech technologies as well as facial recognition and personal digital assistants bring assistance and ensure better real-time participation. Google’s DeepMind uses AI to create lip reading algorithms to interpret whole phrases. Microsoft’s Seeing AI is a computer vision program designed to narrate the environment to the visually impaired. Care robots, described as being at the intersection of health, care and accessibility, help alleviate social isolation and loneliness among older adults, helping with depression and other mental and physical ailments.[46]

These use cases, and many others both current and future, give a glimpse of the potential of Artificial Intelligence systems as the technology matures and grows further into other areas of our lives. Nor is the environmental impact these systems are having irrelevant, with one report by Bloomberg stating that “AI is already wreaking havoc on global power systems”.[47] It is important to note that alongside the risks, which this book addresses, there are also benefits. And, for many, the benefits to humanity greatly outweigh any risks and adverse impacts associated with our continued use and application of AI.

Structure

This book is broken down into three distinct parts. Part I, encompassing chapter 1, will address the current state of Artificial Intelligence and how we reached this point. It gives a brief historical overview of developments in the field, which have largely progressed with improvements in processing technology, leading up to the publication in 2014 of Superintelligence, which sets out definitive definitions for new concepts and provides details of pitfalls and likely outcomes. The chapter will look at the key market developments in the field of AI today to assess where we stand and to give the reader context for how the technology has evolved to its current position.

Chapter 2 will consider the issue of copyright. This has been one of the foremost areas of conflict in the roll-out of Artificial Intelligence LLMs and pivots on two particular issues: (i) the manner in which LLMs are trained and whether there is a breach of copyright inherent in this training process where copyrighted materials are made available to the model as part of its training; and (ii) whether the output of the LLM is capable of infringing copyright, including as part of its process of memorisation where it provides passages from copyrighted materials to an end-user. The key case in this context is the New York Times litigation, which will be considered in some detail. Other issues addressed in the chapter include AI-generated materials and whether these constitute “works” for the purpose of copyright. Aligned to this issue is the question of user inputs in the process of generating the material – put simply, whether a more complex series of instructions by a user denotes a higher probability of a successful copyright outcome for that user. The positions on this point already diverge across the jurisdictions, and the chapter will show that the view of the court in China is different to the position taken by the USPTO in the United States of America. The chapter will also consider inter-jurisdictional differences in copyright law which may dictate a different approach to these issues as they arise in the various courts.

Chapter 3 will consider other intellectual property rights not covered under the previous chapter on copyright.[48] The chapter will consider the issue of patent protection and will begin with a concept touched upon in the previous chapter: a “predominantly human intellectual activity”. This concept may help us in considering the outer parameters of acceptable LLM involvement in the patent drafting process – the idea that greater human involvement is more likely to yield a positive patent outcome. The chapter will consider how machines are capable of radically altering the landscape of patent drafting as LLMs become an ever-increasing part of the inventive process. It will also consider the case in which an AI system was named as an inventor in various jurisdictions, the outcomes in each, and EPO outreach on the issue of inventorship and Artificial Intelligence. Finally, the chapter will consider whether Artificial Intelligence technology can ever, itself, be the subject of a patentable invention.

Chapter 4 will consider the issues of data protection and cybersecurity. These issues were considered sufficiently closely related to receive treatment in the same chapter, as the edifice of data protection is built on adequately securing data. The chapter begins with a quote from Advocate General Pitruzzella which focuses on the sheer breadth of data now available to data controllers, describing as “one of the principal dilemmas of contemporary liberal democratic constitutionalism” the question of what balance should be struck between the individual and society in an age of algorithmic prediction.

The chapter considers in some detail the 2023 move by the Italian data protection authority to temporarily block the LLM ChatGPT out of an abundance of caution relating to the processing of personal data of individuals located in Italy. The chapter looks at some of the issues involved, the reasons for the original block, and the subsequent remedial measures adopted by the LLM provider. The GDPR violations included: transparency, legal basis, accuracy, and age verification mechanisms. The chapter will also look, in a general overview, at seven different data protection issues which potentially arise from the deployment of LLMs: the legal basis for AI training on personal data; the legal basis for end-user prompts containing personal data; information requirements; model inversion, data leakage and the right to erasure; automated decision-making; protection of minors; and purpose limitation and data minimisation.

As mentioned, the chapter will also consider the issue of cybersecurity, with a focus on the vast amount of data that LLMs process. Vulnerabilities include targeted attacks in various forms, including data poisoning and adversarial attacks. The EU AI Act specifically addresses cybersecurity as part of its provisions and these will be considered. Other measures, including EU plans to adopt a Cyber Resilience Act, are also considered, as well as the role of the European Union Agency for Cybersecurity (ENISA). The EU’s Network and Information Systems Directive (NIS2) is also considered.

As part of this chapter’s commentary on data issues and corollary issues, the chapter will also consider digital services, including the EU Digital Services Act (DSA), which entered into full force on 17 February 2024 and which imposes obligations directly on intermediary service providers. In Ireland the Digital Services Act 2024 was enacted to give effect to measures arising from Ireland’s obligations under the DSA.

Finally, continuing the theme of data, the chapter considers the bleak phenomenon described as AI image manipulation: where a legitimate photo is manipulated to create a deepnude image of an underage student before being distributed widely among a school group. The chapter points out that the measures taken by various schools have differed as authorities come to grips with a practice not yet adequately captured by regulation.

Chapter 5 considers the issue of Artificial Intelligence and liability. The chapter begins with the issue of whether LLM providers are liable in damages in circumstances where the model has “hallucinated” and defamed an individual by making false accusations about that person. Issues arise around so-called red-team modelling, where providers intervene to prevent false accusations, and whether such interventions could leave the provider exposed in subsequent litigation following the publication of a false accusation. Potential legal defences are also considered, including that hallucinations are not the result of human choice or agency and cannot consequently reach the threshold for defamation; the experimental nature of generative AI; and the use of disclaimers. There is even an argument over whether LLMs actually publish at all, or whether they simply produce a draft which the user can ultimately choose to publish or not to publish. User input is also considered, with a view to establishing whether user inputs – for example, requesting particular content – are sufficient for liability to attach to the user. The chapter also briefly looks at the first case of its kind in Ireland, where a well-known radio and television personality was incorrectly depicted in a story involving another person. The chapter then moves on to consider the EU’s proposed AI Liability Directive (since withdrawn) and the new Product Liability Directive. It also considers the future question of liability for robots.

Chapter 6 considers the issue of superintelligence. It begins by looking at our understanding of intelligence, arguing that no clear definition for the term exists, before adopting the term of art put forward by Max Tegmark – intelligence is the ability to perform a complex task. It then presents the argument that machine intelligence is possible before looking in more detail at superintelligence. It considers the state of the market in terms of superchip manufacture and product restrictions into China – which adds a geopolitical element to the piece. It then describes in detail the types of superintelligence we can expect to encounter in our lifetime if Artificial General Intelligence (AGI) is achieved. The chapter will look at the concept of take-off – quite literally the speed of adaptation of the superintelligence – as well as concepts like FOOM, an initial spike in the intelligence of the machine which may only give humans a few minutes to react. This chapter may be useful to some lawyers in providing background on the Artificial Intelligence terrain and on where industry anticipates the technology will evolve.

Chapter 7 will consider the issue of AI in the workplace. It will focus on the employment market and the effect, if any, of AI on current jobs, and will examine the ways in which AI has already changed the way we work – allowing us to be more efficient with our time. It will look at AI and the law and the ways in which AI can coalesce with our work as lawyers, and will address limitations of the technology in terms of accountability for its output – something of which lawyers should be keenly aware. It will take the reader through the various AI offerings on the market for the lawyer today and outline issues, including those of professional responsibility, which could be impacted by AI.

Part II of the book will look at the comparative aspect to Artificial Intelligence and encompasses chapter 8, chapter 9, chapter 10, chapter 11 and chapter 12. 

It will begin in chapter 8 with the position in the United States of America, where an executive order (now rescinded) was originally issued in respect of the technology by the White House. It will also consider the blueprint for an AI bill of rights. It will show that the United States of America position is firmly rooted in encouraging innovation in this field, and, interestingly, specific provisions on inviting experts in this area to work in America formed part of the ambit of the order.

Chapter 9 will consider the position in the European Union, where the Artificial Intelligence Act (EU) has entered into force after years of deliberation involving all of the EU institutions. The chapter will comprehensively address each provision of the Act, including its preamble, and will add background context for some of the provisions: especially the late additions which apply to so-called foundational models and which were adopted following a campaign by concerned interest groups, academics, business insiders, and others. It will consider open source material, provisions specific to LLMs, governance structures and outreach to experts. It will address the risk classification framework set down in the Regulation and it will consider downstream applications, deep-fakes, biometrics, and generative AI.

Chapter 10 will consider the position in Brazil, where several different draft bills are in circulation but where it appears the primordial bill now resembles the adopted position in the EU in terms of its risk classification structure and notification provisions. The chapter will argue that the newly founded position in Brazil demonstrates an example of Bradford’s Brussels Effect, where regulatory efforts in that country resemble the EU position, principally in its risk-classification framework.

Chapter 11 will consider the moves to regulate this space in China and will look at the Chinese regulatory regime. China entered the regulatory environment for AI early, with a regulatory framework contemplated as far back as 2016 when it considered its Cybersecurity Law.[49] In 2017 the State Council issued a New Generation AI Development Plan focusing on encouraging AI development and laying out a timetable for AI governance regulations until 2030.[50] In 2019 the National New Generation AI Governance Expert Committee issued a document[51] setting down eight principles for AI governance. In 2021 China issued a regulation on recommendation algorithms,[52] which creates new requirements for how algorithms are built and deployed as well as disclosure rules for Government and the public. In 2022 it issued rules for deep synthesis (synthetically generated content)[53] and in 2023 it issued interim measures on generative AI systems like GPT 4.[54]

Chapter 12 will consider the proposed position in Canada. The Canadian legislature has proposed provisions on the issue of Artificial Intelligence as part of Bill C-27, broadly called the Digital Charter Implementation Act, 2022, where the relevant Part of that Act (Part 3) is described as the Artificial Intelligence and Data Act (AIDA).

Part III of the book will consider other issues which arise in respect of our understanding of Artificial intelligence and where we might expect the technology to be in a few years. It contains chapter 13 and chapter 14.

Chapter 13 considers the issue of jurisdiction-by-jurisdiction legislative endeavour and asks whether this might be a futile exercise in an open environment where technology is international and where certain countries may choose not to legislate or to adopt a light-touch approach. It considers whether we could adopt an international regulatory regime for the enforcement of AI protocols. This chapter also looks in some detail at various international initiatives in this space, including those of the Global Partnership on AI, the United Nations, the Hiroshima AI Process, UNESCO, and the Council of Europe. These bodies have all assisted in raising awareness around the relevant safety issues, defining concepts, and coordinating concerted action. The chapter also considers the UK approach pursuant to its Artificial Intelligence (Regulation) Bill 2024, which takes a sector-by-sector approach to regulation and under which an Artificial Intelligence Authority would oversee the implementation of five key AI principles in practice within the respective areas of competence of existing regulators.

Chapter 14 provocatively asks whether AI will be the next discriminatory ground under the Equal Status Act. For many years that Act has applied nine discriminatory grounds but the chapter asks whether the tenth ground will be a restriction on discriminating against an AI (“the AI ground”). 

Finally, chapter 15 contains the conclusion and looks at what we have learned and paves the way for what to expect in the next few years.


[1] Recent Events 137 Harv. L. Rev 1282 at 1283.

[2] https://www.ft.com/content/174c2759-c5b8-42ed-adc2-8d5f659f5982

[3] https://www.ft.com/content/94b9878b-9412-4dbc-83ba-aac2baadafd9

[4] https://www.ft.com/content/f87b693f-9ba3-4929-8b95-a296b0278021

[5]  https://www.irishtimes.com/business/2025/01/06/ireland-has-chance-to-take-a-leading-ai-regulatory-role-starting-with-the-appointment-of-a-dedicated-minister/

[6] https://enterprise.gov.ie/en/publications/membership-of-the-ai-advisory-council.html

[7] https://www.irishtimes.com/technology/big-tech/2025/01/11/ai-advisory-group-warns-of-potential-for-mass-surveillance/

[8] https://enterprise.gov.ie/en/publications/call-for-expressions-of-interest-ai-advisory-council.html

[9] https://www.lawlibrary.ie/app/uploads/securepdfs/2024/04/The_Bar_Review_APRIL_24_WEB-1.pdf#page=18

[10] https://www.youtube.com/watch?v=y1qX0_7a9ys

[11] https://www.youtube.com/watch?v=vLCqSUUOmy0

[12] https://www.youtube.com/watch?v=V1KG4pzhMTg

[13] https://www.nytimes.com/2024/04/18/world/asia/india-election-ai.html

[14] See for example the Channel 4 Dispatches report which aired 27/06/2024 entitled: “Can AI steal your Vote” where 24 people were subjected to biased AI-generated political content which managed to successfully swing the vote of 92% of participants – an astonishingly high figure. One source says it may be possible to create an-AI generated virus for less than $100,000 with the potential to kill millions https://www.nytimes.com/2024/07/27/opinion/ai-advances-risks.html

[15] Rickards, MoneyGPT: AI and the Threat to the Global Economy (Penguin, 2024)

[16] https://www.bloomberg.com/news/articles/2024-03-13/eu-embraces-new-ai-rules-despite-doubts-it-got-the-right-balance?embedded-checkout=true

[17] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

[18] https://www.ft.com/content/5b7a76be-467c-4074-8fd0-3e297bcd91d7

[19] https://www.cancer.gov/news-events/cancer-currents-blog/2022/artificial-intelligence-cancer-imaging

[20] The Hiroshima AI Process refers to the prioritising of “the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education” See https://www.mofa.go.jp/files/100573471.pdf

[21] https://www.economist.com/by-invitation/2024/06/17/ray-kurzweil-on-how-ai-will-transform-the-physical-world. For treatment of the promise of AI and its challenges in Healthcare see generally: 

Ivan Khoo Yi and Andrew Fang Hao Sen, ‘The Rise and Application of Artificial Intelligence in Healthcare’ in Jyh-An Lee, Reto M Hilty and Kung-Chung Liu (eds), Artificial Intelligence & Intellectual Property (Oxford University Press 2021) and see also Lisa van Dongen, Rethinking Exclusivity – A Review of Artificial Intelligence & Intellectual Property by Jyh-An Lee, Reto M Hilty and Kung-Chung Liu, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae007, https://doi.org/10.1093/ijlit/eaae007

[22] https://www.nytimes.com/interactive/2024/07/29/science/ai-weather-forecast-hurricane.html

[23] https://www.researchgate.net/publication/328997635_Artificial_Intelligence_for_Interstellar_Travel

In an article author Lobel cites multiple positive use cases for AI, when she says: “What is AI-for-Good? The answer depends, of course, on our definition of “good,” but there are social values and goals that are likely to garner a broad consensus: protecting the environment, combatting hunger and illiteracy, advancing medicine and healthcare, and supporting education and accessibility.”   Orly Lobel, The Law of AI for Good, 75 Fla. L. Rev. 1073 (2023) at 1094. See https://www.floridalawreview.com/article/91298-the-law-of-ai-for-good. In another article AI is said to be “poised to be a catalyst for unprecedented achievement”’ KIROVA, V.D., Ku, C.S., Laracy, J.R. and Marlowe, T.J., 2023. The Ethics of Artificial Intelligence in the Era of Generative AI. Journal of Systemics, Cybernetics and Informatics, 21(4), pp.42-50.

[24] See chapter on Superintelligence

[25] https://www.nytimes.com/2025/01/25/us/politics/trump-immigration-climate-dei-policies.html

[26] https://www.nytimes.com/2024/04/12/business/artificial-intelligence-mistral-france-europe.html?searchResultPosition=2

[27] https://legis.senado.leg.br/sdleg-getter/documento?dm=9347593&ts=1698248944489&disposition=inline&_gl=1*1oqxom7*_ga*MTMxOTQ1Njg5NC4xNjk4NzU3MjQ1*_ga_CW3ZH25XMK*MTY5ODc1NzI0NC4xLjEuMTY5ODc1NzMwMy4wLjAuMA..

[28] Interim Measures for the Management of Generated Artificial Intelligence Services (China) https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm. One author says of the Chinese rules: “In order to encourage the innovative development of generative AI technologies and avoid the risks they pose, China has adopted an “inclusive legal governance” model.” GUO Xiaodong. Risks of Generative Artificial Intelligence and Its Inclusive Legal Governance[J]. Journal of Beijing Institute of Technology (Social  Sciences Edition), 2023, 25(6): 93-105, 117. DOI: 10.15918/j.jbitss1009-3370.2023.1340 see https://journal.bit.edu.cn/sk/en/article/doi/10.15918/j.jbitss1009-3370.2023.1340?viewType=HTML

[29] https://www.parliament.nsw.gov.au/committees/inquiries/Pages/inquiry-details.aspx?pk=2968

[30] Venezuela and others even went as far as banning Chat GPT. https://lookerstudio.google.com/u/0/reporting/5d4b1a7d-9300-42e0-939b-aee7829f6ad9/page/JCTbD

[31] Financial Times (Subscription needed) https://www.ft.com/content/c7f8b6dc-e742-4094-9ee7-3178dd4b597f In response a Trump official rejected global governance for AI https://www.ft.com/content/2add9af0-c563-484e-96b6-ebd553129145

[32] https://www.ft.com/content/311b29a4-bbb3-435b-8e82-ae19f2740af9

[33] https://www.ft.com/content/94b9878b-9412-4dbc-83ba-aac2baadafd9

[34] https://www.brookings.edu/articles/a-roadmap-for-a-us-china-ai-dialogue/

[35] Ibid. 

[36] https://www.ft.com/content/90090232-7a68-4ef5-9f53-27a6bc1260cc

[37] https://www.ft.com/content/f9e7f628-4048-457e-b064-68e0eeea1e39

[38] https://regulatingai.org

[39] Orly Lobel, The Law of AI for Good, 75 Fla. L. Rev. 1073 (2023) at 1094. See https://www.floridalawreview.com/article/91298-the-law-of-ai-for-good

[40] Ibid at 1094

[41] Ibid at 1097

[42] Ibid.

[43] Ibid at 1100

[44] https://www.nytimes.com/2024/05/08/technology/google-ai-molecules-alphafold3.html

[45] https://www.nytimes.com/2024/07/10/health/doctors-insurers-artificial-intelligence.html

[46] Ibid at 1101

[47] https://www.bloomberg.com/graphics/2024-ai-data-centers-power-grids/

[48] It sidesteps the issue of the distinction between patents and so-called “soft ip” – see https://columbialawreview.org/content/hard-truths-about-soft-ip/

[49] https://digichina.stanford.edu/work/experts-examine-chinas-pioneering-draft-algorithm-regulations/

[50] “The third step is that by 2030, the theory, technology and application of artificial intelligence will generally reach the world’s leading level, becoming the world’s major artificial intelligence innovation centre, and the intelligent economy and intelligent society have achieved remarkable results, laying an important foundation for becoming an innovative country and a powerful country.” https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm

[51] Governance Principles for New Generation AI: Develop Responsible Artificial Intelligence https://digichina.stanford.edu/work/translation-chinese-expert-group-offers-governance-principles-for-responsible-ai/

[52] https://digichina.stanford.edu/work/translation-guiding-opinions-on-strengthening-overall-governance-of-internet-information-service-algorithms/

[53] https://www.chinalawtranslate.com/en/deep-synthesis/

[54] The Personal Information Protection Law (2021) also impacts on Artificial Intelligence https://digichina.stanford.edu/work/translation-personal-information-protection-law-of-the-peoples-republic-of-china-effective-nov-1-2021/

Chapter 1

What is AI: a summary of developments

One thing nearly all parties to these debates do seem to agree on is that AI is — or at least will be — transformative. Yet AI is arguably distinct from past transformative technologies, such as the telegraph or the automobile or the smartphone, which also had enormous implications for economic growth, employment, and social interaction. By its very nature, and given its potential applications, AI raises deep questions about the nature and place of the human person in a way and to a degree that few other technologies do.[1]

Introduction

This brief chapter considers the current state of the market for Artificial Intelligence and brings the reader up to date with developments in this space since the 1950s. It considers Alan Turing’s pivotal 1950 paper Computing Machinery and Intelligence[2] before quickly moving through developments in the later part of the twentieth century. It then brings into focus the brilliant book by Nick Bostrom entitled Superintelligence which sets out definitive definitions for new concepts and provides details of pitfalls and likely outcomes.

It also looks at large language models and questions whether these already constitute a form of Artificial General Intelligence. It will look at developments in the courts in this space, where one entrepreneur is asking a court in the United States of America to decide exactly that question. The chapter will also keep one eye on the future and will briefly address what the future holds in store for lawyers[3] before returning to this point later in chapter 7.

Current State of the Market

Developments in the field of AI are moving quickly. Saudi Arabia announced it is to create a fund of $40 billion to invest in Artificial Intelligence in a move described as “the latest sign of the gold rush toward a technology that has already begun reshaping how people live and work.”[4] It has certainly been a gold rush. OpenAI, the creator of the well-known large language model ChatGPT and its successor GPT 4, has recently hit a $2 billion revenue milestone as its growth has rocketed,[5] and it seeks investment that would value it at $150 billion.[6] This follows the roll-out as recently as November 2022 of its ChatGPT model, which provides life-like answers to inputs. It was a release that caught the imagination of a worldwide audience: 1 million subscribers after 5 days[7] is testament to this.

How did we get to this point?

While computational models inspired by neural connections have been studied since the 1940s,[8] the story of Artificial Intelligence begins in 1950. Alan Turing, of Enigma fame, raised the idea in a paper of that year,[9] first putting forward what became known as the Turing Test. The introduction of this paper is important and will be quoted in detail. It begins as follows:

“I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. 

The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart front (sic) the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. (…) 

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”[10]

The term Artificial Intelligence, abbreviated to AI, was first coined by John McCarthy at the Dartmouth Conference in 1956. Entitled the Dartmouth Summer Research Project on Artificial Intelligence this was a seminal event for artificial intelligence as a field.[11]

Suffice it to say there were no immediate successes in the field of Artificial Intelligence sufficient to pass the Turing Test at that time. As researchers flocked to the subject there followed a realisation that the computational power necessary to pass such a test was simply not available. The idea was far ahead of its time. Still, attempts were made: in the late 1960s the American Society for Cybernetics held symposiums – an idea dreamt up by a CIA operative and designed to counter the USSR’s clout in computing and mastery of the area. Cybernetics was a precursor to Artificial Intelligence.[12]

Artificial Intelligence also lived on for decades in the pages of various science fiction novels: notable among these were I, Robot (1950), Do Androids Dream of Electric Sheep? (1968), and Neuromancer (1984), among many others. In film too Artificial Intelligence thrived: no more so than with the iconic 1991 film Terminator 2, which was a global sensation on its release and subsequent distribution. To this day, worldwide, references to “Terminators” have become ubiquitous as our understanding of an imminent Artificial Intelligence revolution dawns on us. Other imaginings of AI include the adaptive-network plot in season one of The X-Files – a maniacal AI destroyed when Mulder inserts a floppy disk containing a virus.

In 1980, as the world witnessed the growth of computerisation, one of the issues of the day was addressed by Peter Schefe in a paper on the “limitations of artificial intelligence”,[13] in which he considered whether there were deficiencies in a published article by another author, V.S. Cherniavsky, who had put forward the position that “machine intelligence cannot equal human intelligence” in his paper “On limitations of artificial intelligence”.[14] Cherniavsky had argued that “human intelligence has to be non-deterministic in a sense which cannot be modeled by any formalism”. Besides Schefe, L.K. Schubert also criticised Cherniavsky’s position in the literature: in his published comments on Cherniavsky’s work,[15] he took the view that Cherniavsky’s position was “repudiated”.

In 2000 the concept of “deep learning” first emerged: the use of neural networks with many “deep” layers made up of large numbers of artificial neurons.[16] Also in 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with the purpose of accelerating the development of Artificial Intelligence. However, he became discontented with that aim and moved the organisation to Silicon Valley in 2005. Later renamed the Machine Intelligence Research Institute (MIRI), it advocated a cautious approach to AI and highlighted the risks to humanity potentially associated with its development.
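For readers unfamiliar with the terminology just mentioned, a “deep” network simply stacks many layers of artificial neurons, each transforming the output of the layer before it. The Python sketch below is a minimal, hedged illustration of that stacking only, with made-up layer sizes and random weights; real systems learn their weights from data using specialised libraries.

import random

def relu(values):
    # A common activation function: keep positive values, zero out negatives.
    return [max(0.0, v) for v in values]

def dense_layer(inputs, weights, biases):
    # One layer of artificial neurons: each neuron takes a weighted sum of
    # all inputs and adds a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def random_layer(n_in, n_out):
    # Illustrative random weights; training would adjust these.
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return weights, [0.0] * n_out

# A "deep" network is just many such layers applied in sequence.
layer_sizes = [8, 16, 16, 16, 4]   # made-up sizes, purely illustrative
layers = [random_layer(a, b) for a, b in zip(layer_sizes, layer_sizes[1:])]

signal = [random.uniform(0, 1) for _ in range(layer_sizes[0])]
for weights, biases in layers:
    signal = relu(dense_layer(signal, weights, biases))
print(signal)  # the network's output for this random input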

In 2014 public interest in Artificial Intelligence grew. This was the year that Nick Bostrom published his seminal work Superintelligence.[17] The book set out and defined fresh concepts with respect to achieving Artificial General Intelligence (AGI) and looked at various possible outcomes. It was an eye-opening text which covered many previously unexplored aspects of the technology. It was also explicit in its articulation of the risks to humanity, emphasising the possibility that humans may have only minutes to react to a sudden increase in the intelligence of a superintelligence.

In 2015 a research organisation called OpenAI was set up with the stated aim of developing “safe and beneficial” artificial general intelligence.[18] Microsoft provided OpenAI with a $1 billion investment in 2019[19] after the company, originally a non-profit, developed a for-profit arm.[20] That organisation released the tool ChatGPT to market in November 2022.[21] The subsequent explosion in user numbers led to a further injection of cash from Microsoft of $10 billion in 2023,[22] the same year in which it released an updated version of ChatGPT called GPT-4.[23] Subsequent versions followed in 2024 (GPT-4o) and 2025 (GPT-5).[24]

Does GPT-5 constitute Artificial General Intelligence (AGI)?

The question arises whether GPT-5, the improved iteration of ChatGPT, already constitutes Artificial General Intelligence (AGI) sufficient to pass the Turing Test. ChatGPT is a large language model, sometimes referred to as a frontier model, and approaches the creation of Artificial General Intelligence from a particular standpoint: language generation. As indicated elsewhere, this text does not go far into the intricate details of how Artificial General Intelligence might actually be created – there are others far better versed who can explain this – but it does provide a rough overview. One criticism of large language models is that they are limited, and consequently neither constitute AGI nor will lead to its creation. Yann LeCun, an AI scientist at Meta, lists the following observations about Large Language Models (LLMs), together with some criticisms:

“They are useful as writing aids.

They are “reactive” & don’t plan or reason.

They make stuff up or retrieve stuff approximately.

That can be mitigated but not fixed by human feedback.

Better systems will come.

Current LLMs should be used as writing aids, not much more.

Marrying them with tools such as search engines is highly non trivial.

There *will* be better systems that are factual, non toxic, and controllable. They just won’t be auto-regressive LLMs. (…)

Warning that only a small superficial portion of human knowledge can ever be captured by LLMs.

Being clear that better system will be appearing, but they will be based on different principles.

Why do LLMs appear much better at generating code than generating general text? Because, unlike the real world, the universe that a program manipulates (the state of the variables) is limited, discrete, deterministic, and fully observable. The real world is none of that.”[25]
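The “auto-regressive” label in LeCun’s criticism refers to the way such models produce text: one token at a time, each chosen from a probability distribution conditioned on what has come before. The toy Python sketch below is a hedged illustration of that sampling loop only; the vocabulary and probabilities are invented, and real models condition on thousands of tokens of context rather than one.

import random

# A toy "language model": for each current word, a probability table over
# possible next words. Real LLMs encode billions of such statistics
# implicitly in their weights rather than in an explicit table.
toy_model = {
    "machines": {"think": 0.6, "learn": 0.3, "dream": 0.1},
    "think":    {"about": 0.7, "again": 0.3},
    "about":    {"language": 0.5, "law": 0.5},
}

def sample_next(word):
    # Auto-regressive step: sample the next token from the distribution
    # conditioned on the current token.
    options = toy_model.get(word)
    if not options:
        return None
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start, max_tokens=5):
    out = [start]
    for _ in range(max_tokens):
        nxt = sample_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("machines"))  # e.g. "machines think about law"

Because each step is a statistical guess, fluent output is no guarantee of factual accuracy, which is the root of the “make stuff up” point above.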

This book does not go into detail about the nature of GPT-5, as it assumes the reader is already familiar with it, much as the reader is familiar with Google, without further explanation. In many respects the two are similar to a casual user: both provide information based on user inputs, albeit in different ways. Google lists a series of results with hyperlinks, while GPT-5 provides direct answers. Look under the hood, however, and these tools are far, far different. There were glimmers of Artificial General Intelligence even in the original iteration of ChatGPT. This author was one of the curious 1 million who used the tool in the first five days after its release. Our conversation ranged across a variety of topics in the early hours of the morning, when waiting times were lowest. Somewhere in the middle of our chat the realisation dawned that I was privileged enough to have access to a completely new type of intelligence: one that never tired of silly questions and was always capable of providing astonishing results. Sure, there were times when it erred, a phenomenon known as hallucination,[26] but it was abundantly clear that future iterations of this tool, more accurate and less likely to hallucinate, have the potential to change the world.

It may change the world in other ways too. Imagine if the Artificial Intelligence got to know me or you: all of our conversations with it, our life skills, interests and concerns, and imagine that it stored these away so that it was capable of always remembering them, rather like an old friend, only with access to a far wider range of information. Now imagine that this new-found friend was also capable of having the same friendship with everyone else on the planet. That is the power of Artificial Intelligence that we are now waking up to.

To answer our earlier query, then: does GPT-5 constitute Artificial General Intelligence? There are some who hold the view that we are now on the cusp of such a development.[27] It may well be that the remarkable technology behind large language models like GPT-5 is an early example of an intelligence that is human-like.

Tech entrepreneur and pioneer Elon Musk has asked a court to decide whether an older version, GPT-4, constitutes human-level intelligence.[28] The request was made as part of a dispute[29] between Mr Musk and OpenAI, a company he had been involved with since its inception, arising, he says, from its move away from its own founding values. Simply put, his claim is that OpenAI is putting profits ahead of benefiting humanity. At the same time he is putting money into a competing venture called xAI[30] and was reported as predicting the achievement of human-level intelligence by the end of 2025,[31] citing “electricity supply” as the only remaining constraint.[32] Musk, one of the founding members of OpenAI, left in 2018, reportedly, on one view, over a difference as to whether a for-profit arm should be created.[33] Another view is that Musk had decided in late 2017 to try to seize control of OpenAI from its co-founder and CEO Sam Altman and the other founders, aiming to convert it into a commercial entity in partnership with Tesla; Altman and others resisted, and Musk resigned.[34] His legal complaint in California claims he donated more than $44 million to OpenAI between 2016 and 2020.[35]

New Scientist reports further that:

“[I]n a lawsuit filed in a California court, Musk, through his lawyer, has asked for “judicial determination that [GPT-4] constitutes Artificial General Intelligence and is thereby outside the scope of OpenAI’s license to Microsoft”. This is because OpenAI has pledged to only license “pre-AGI” technology. Musk also has a number of other asks, including financial compensation for his role in helping set up OpenAI.”[36]

Of course, the question invariably arises, regardless of whether we consider the current LLMs to be a type of Artificial General Intelligence, whether they will replace us. Will, for instance, an Artificial Intelligence write this book far better than I am attempting to? I think the answer is no, or at least a cautious no. Ultimately it boils down to accountability. This is one of the reasons why clients consult a high street lawyer: so they can eyeball him or her and get a feel for whether this is the right person to protect their interests. The same applies to books: we do not want to be in a position where we are taking information from a virtually unaccountable source and relying on it unquestioningly. It would run against the better nature of those who research, and it would give a sense of short-cutting. Nor would we be satisfied with taking all of our information from one source. Suppose Artificial Intelligence writes its own books, or their future equivalent, and then cites its own previous works in later publications: we would end up with a world with only one author! That will never happen, so I think we will probably not advance much further than we already have – we will use Artificial Intelligence for prompts and then look up the information provided. We will compile our own research and write our own content. This is the right thing to do in an economy that will always look for accountability. It is also in keeping with the guidance issued on the subject of AI by the Bar Council of England and Wales.[37]

Conclusion

This chapter has covered a lot of ground in a short space. It has quickly brought the reader up to date with market developments and shown that we are in the midst of an AI “gold rush”. It has looked at the origins of the AI concept, beginning with the seminal 1950 paper by Bletchley Park codebreaker Alan Turing, and quoted at length the passage that sets out the so-called Turing Test. The chapter has also looked at the origins of the development of Artificial Intelligence as researchers sought to pass that test. These developments, in the latter half of the Twentieth Century, took place mainly in the literature, as there was no processing power adequate to pass the Turing Test. Not until the early part of the Twenty-First Century did Artificial Intelligence take off as a fully-fledged field of science, and in 2014 the public began to take an interest in the subject, coinciding with the publication of the seminal text Superintelligence. In 2015 an organisation called OpenAI established itself, initially as a not-for-profit, and sought to advance Artificial General Intelligence. In 2019 that organisation took a $1 billion investment from Microsoft and created a for-profit arm. In November 2022 it released, to acclaim, its Artificial Intelligence tool ChatGPT.

The chapter has taken the reader through the primary developments, which begin, in many respects, with the Turing paper mentioned above. Readers will be familiar with Turing’s work on Enigma and his efforts to bring about an Allied victory in WWII from his base at Bletchley Park. Some 70 years later, experts in the field of Artificial Intelligence and government representatives met in the same location to discuss the risks associated with the rise of Artificial Intelligence, in an era in which computer processing power is growing sufficiently to make machine intelligence a more realistic possibility.

The chapter has shown that, during periods when such processing power was not available, the concept of Artificial Intelligence lived on in the literature, both academic and in science fiction. In the early 2000s the concept became more real, and organisations like the Machine Intelligence Research Institute heralded the rise of a group of concerned onlookers, worried for the future and about the capabilities of, and potential fallout from, the technology.

Fast forward to 2022 and AI company OpenAI was first to market with its Large Language Model tool ChatGPT. This transformative new technology has potentially far-reaching applications, as its trillion-plus data points and life-like responses to the questions posed to it bring humans ever closer to interaction with an entirely new form of intelligence. It has not come without criticism, though; as one author puts it:

“The most explicit, sheep’s-clothing promise of the current AI revolution — that it will perfect the age of algorithmic attunement by making it feel like the system is working for you and only you — is belied by its deprioritization, even diminishment, of a foundational part of what we want from our digital institutions: to encounter other people and discover new things.”[38]

We will return to the issue of Artificial Intelligence technology in chapter 6, when we consider the drive towards superintelligence. For now this book will turn to other pressing issues, beginning with that of copyright.


[1] Mills, M. Anthony. “A President’s Council on Artificial Intelligence.” The New Atlantis, no. 75, 2024, pp. 100–07. JSTOR, https://www.jstor.org/stable/27283819. Accessed 2 June 2024.

[2] Turing, Computing Machinery and Intelligence, Mind 49: 433 to 460.

[3] On the development of Artificial Intelligence and the Law see the useful commentary Villata, Serena, et al. “Thirty Years of Artificial Intelligence and Law: The Third Decade.” Artificial Intelligence and Law, vol. 30, no. 4, December 2022, pp. 561-591. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/artinl30&i=573.

[4] https://www.nytimes.com/2024/03/19/business/saudi-arabia-investment-artificial-intelligence.html?searchResultPosition=2

[5] https://www.ft.com/content/81ac0e78-5b9b-43c2-b135-d11c47480119

[6] https://www.nytimes.com/2024/09/11/technology/openai-fund-raising-valuation.html?searchResultPosition=4

[7] https://twitter.com/gdb/status/1599683104142430208

[8]https://www.mckinsey.com/~/media/mckinsey/featured%20insights/digital%20disruption/harnessing%20automation%20for%20a%20future%20that%20works/mgi-a-future-that-works_full-report.pdf

[9] https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf

[10] https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf at pg. 1.

[11] https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth

[12] https://www.ft.com/content/c63dae2b-b0d5-4b27-a718-2cce165097b9

[13] Schefe, Peter, “On limitations of V.S. Cherniavsky ‘On limitations of artificial intelligence’, Information Systems 5 (1980), 121–126”, SIGART Bulletin, 1980. https://www.researchgate.net/publication/242788023_On_limitations_of_VS_Cherniavsky_on_limitations_of_artificial_intelligence_information_system_5_1980_121–126


[14] Vladimir S. Cherniavsky, On limitations of artificial intelligence, Information Systems, Volume 5, Issue 2, 1980, Pages 121-126 https://www.sciencedirect.com/science/article/abs/pii/0306437980900034

[15] L.K. Schubert, Comments on cherniavsky’s paper “On algorithmic natural language analysis and understanding”, Information Systems, Volume 4, Issue 1, 1979, Pages 57-59.

[16]https://www.mckinsey.com/~/media/mckinsey/featured%20insights/digital%20disruption/harnessing%20automation%20for%20a%20future%20that%20works/mgi-a-future-that-works_full-report.pdf

[17] Bostrom, Superintelligence, Oxford 2014.

[18] A breakaway company entitled Safe Superintelligence (SSI) was valued at $5 Billion just 3 months after launch. https://www.ft.com/content/2988c7a3-0e70-4c5d-a3f3-665c2d0c37d3

[19] https://www.cnbc.com/2019/07/22/microsoft-invests-1-billion-in-elon-musks-openai.html

[20] https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai

[21] https://openai.com/blog/chatgpt

[22] https://www.nytimes.com/2023/01/23/business/microsoft-chatgpt-artificial-intelligence.html#:~:text=A%20year%20later%2C%20Microsoft%20invested,technologies%20OpenAI%20is%20known%20for.

[23] https://openai.com/research/gpt-4

[24] GPT-4o (2024); GPT-5 (2025)

[25] https://medium.com/@zhaosw/are-large-language-models-a-viable-path-to-artificial-general-intelligence-9756fd9f6f3b

[26] This will be looked at in more detail in Chapter 3.

[27] https://economictimes.indiatimes.com/tech/tech-bytes/elon-musk-says-ai-will-be-smarter-than-any-human-next-year/articleshow/108463055.cms?from=mdr

[28] https://www.newscientist.com/article/2420111-elon-musk-asks-court-to-decide-if-gpt-4-has-human-level-intelligence/

[29] https://www.reuters.com/legal/elon-musk-sues-openai-ceo-sam-altman-breach-contract-2024-03-01/

[30] https://www.bloomberg.com/news/articles/2024-03-01/musk-sues-openai-altman-for-breaching-firm-s-founding-mission?embedded-checkout=true

[31] https://www.irishtimes.com/business/2024/04/08/elon-musk-predicts-ai-will-overtake-human-intelligence-next-year/

[32] Ibid.

[33] https://www.bloomberg.com/news/articles/2024-03-01/musk-sues-openai-altman-for-breaching-firm-s-founding-mission?embedded-checkout=true

[34] https://www.reuters.com/legal/elon-musk-sues-openai-ceo-sam-altman-breach-contract-2024-03-01/

[35] https://www.newscientist.com/article/2420111-elon-musk-asks-court-to-decide-if-gpt-4-has-human-level-intelligence/

[36] https://www.newscientist.com/article/2420111-elon-musk-asks-court-to-decide-if-gpt-4-has-human-level-intelligence/

[37] https://www.barcouncil.org.uk/resource/new-guidance-on-generative-ai-for-the-bar.html#:~:text=Any%20use%20of%20AI%20must,to%20legal%20and%20ethical%20standards. See also the guidance issued in 2012 to the Model Rules of Professional Conduct (American Bar Association), which refers to the “benefits and risks associated with relevant technologies”; for comment see Haight, Iantha, “A Rubric for Analyzing Legal Technology Using Benefit/Risk Pairs”, University of St. Thomas Law Journal, vol. 20, no. 1, Spring 2024, pp. 107–128. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/usthomlj20&i=113, in particular where the author considers that the matter rests on selecting and using technology “ethically and effectively” (ibid at 127).

[38] Houser, Meghan. “AI Is a Hall of Mirrors.” The New Atlantis, no. 76, 2024, pp. 68–78. JSTOR, https://www.jstor.org/stable/27297377. Accessed 2 June 2024 at 77.

Chapter 2

Artificial Intelligence and Copyright

Introduction

In Lim’s provocative piece[1] the author considers that copyright’s doctrines of authorship, originality and fair use struggle to accommodate the layered and distributed nature of AI-mediated creation. He compares the technology’s advance to the banana duct-taped to a gallery wall.


[1] Daryl Lim, Banana republic: copyright law and the extractive logic of generative AI, Journal of Intellectual Property Law & Practice, 2025;, jpaf047, https://doi.org/10.1093/jiplp/jpaf047

This chapter will show that there are several distinct issues with regard to Artificial Intelligence and Copyright. Each of these issues also has a multi-jurisdictional aspect, in that local laws on the subject differ: the doctrine of fair use, for instance, available in the United States of America, is not available in every other jurisdiction. The issues in respect of Large Language Models (LLMs) and copyright infringement are, mainly, twofold: (i) the process of training LLMs with material for which a claim of copyright infringement has been made; and (ii) the output by those models to end users where that output contains material for which a claim of copyright infringement has been made, including through a process of memorisation. The question consequently arises whether a statistical model, which can work from upwards of 1 trillion data points, is capable of infringing copyright in circumstances where its technological output is predictive in nature.

In respect of the end user there are copyright issues also. Principally, the courts must contend with the use of Artificial Intelligence in the creative process to generate material for which a claim of copyright is made. In order to satisfy this requirement, the material produced would have to constitute a “work” for the purposes of copyright. Whether Artificial Intelligence models can produce a work will have to be determined by the courts in each jurisdiction. Another hurdle for the user to cross is the extent to which the user facilitated the model in producing the material in question – for example, the number of commands and instructions given by the user, and whether a greater number of commands increases the likelihood that a claim of copyright can be made over the material. The jurisdictions are increasingly diverging in their treatment of this point, with China already showing a willingness to permit a claim of copyright to follow from such user commands while, in the United States of America, the Copyright Office has been, thus far, resistant to this approach. Finally, there are variations in local law which are relevant: the United Kingdom, for instance, has for many years taken a different approach to computer-generated works, and these will also be considered in the commentary that follows.

Because the technology is so new, there is as yet a dearth of cases on the subject of Artificial Intelligence. ChatGPT, after all, was released as recently as November 2022. Since that time, however, the principal issue which has arisen concerns copyright and other intellectual property matters.[1] One commentator remarked that LLM companies had “created an amazing edifice that’s built on a foundation of sand”, in reference to the use of copyright works in training LLMs.[2] The Economist[3] refers to the effect that questions over copyright are having in slowing growth in the use of copyrighted music recordings to train large language models. Simply put, the contested position of several copyright holders is that large language models, like GPT-4 or its newer iteration GPT-4o, are given access to copyrighted material in order to train the model. This has arisen in several pieces of litigation before the courts: The New York Times case, which concerns the use of 3 million of its articles (against AI company OpenAI);[4] a case brought by Getty Images concerning the copying of its images (against AI company Stability AI), which was decided in favour of Stability AI in November 2025;[5] litigation brought by Universal Music over the reproduction of lyrics without permission (against AI company Anthropic);[6] as well as an action by several music labels against Udio (settled in October 2025) and Suno, accusing them of using copyrighted sounds and songs to train the artificial intelligence that powers their businesses.[7] The chapter will also refer to the provisions of the new EU AI Act which address issues of copyright, and to a Bill introduced in the United States Congress which would require AI companies to submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts.

Copyright[8]

Nor are the issues confined to one or two matters: there are numerous actions now live in the courts,[9] including those concerning the works of Michael Chabon, Ta-Nehisi Coates and comedian Sarah Silverman, entered in district court in California,[10] and those of John Grisham and George R.R. Martin, among others, entered in New York.[11] In March 2024 authors Abdi Nazemian, Brian Keene and Stewart O’Nan filed suit against both Nvidia[12] and Databricks[13] in San Francisco, alleging that the AI systems both companies deployed were trained on a pirated digital compendium of ebooks known as Books3. A few weeks later novelist Andre Dubus III and journalist and nonfiction writer Susan Orlean filed suit in the Northern District of California against the same entities.[14] Another lawsuit, against both OpenAI and Microsoft, has been entered by The New York Times.[15] In its submissions before the court, which have been contested, the newspaper included the following position:

“Defendants’ unlawful use of The Times’s work to create artificial intelligence products that compete with it threatens The Times’s ability to provide that service. Defendants’ generative artificial intelligence (“GenAI”) tools rely on large-language models (“LLMs”) that were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more. While Defendants engaged in widescale copying from many sources, they gave Times content particular emphasis when building their LLMs—revealing a preference that recognizes the value of those works. Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.”[16]

In reply, OpenAI filed an extensive memorandum in which it states, inter alia:

“Copyright is not a veto right over transformative technologies that leverage existing works internally—i.e., without disseminating them—to new and useful ends, thereby furthering copyright’s basic purpose without undercutting authors’ ability to sell their works in the marketplace.”[17]

In essence there appear to be two distinct arguments being made by The New York Times in respect of copyright materials: (i) that the OpenAI and Microsoft large language models (Bing used a version of ChatGPT as part of the chat function on its service, subsequently renamed Copilot) essentially scraped content from The New York Times and made copies of that content before storing it on their own servers, thus violating copyright. That material, which was made available to train the large language models, violated the copyright of The New York Times as it was not licensed and consequently falls foul of the right of reproduction; and (ii) that those models then output that material to users of their service in response to questions posed to them. This is centred on the contention by The New York Times that it is essentially competing with the large language models in the marketplace and that this sort of reproduction of its materials constitutes a “free-ride on The Times’s massive investment in its journalism” and undermines its business model.

In an interview with Harvard Law Today, published online,[18] Mason Kortz, a clinical instructor at the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society, answered questions in respect of this case. His view, when asked about the training element, is that “it is pretty clear that they created copies. OpenAI and Microsoft are likely going to say, yes, there was literal copying, but it was not infringement because it qualifies as fair use.”[19] Fair use is a concept known to United States law.

In respect of the use of those materials in responses generated by the large language models for their users, Kortz feels the matter is more nuanced and describes this claim as a “pretty novel theory”.[20] This centres on the discussion that has taken place over how one classifies a statistical model as a “work” for copyright purposes. Remember that what the training process produces is a statistical model: essentially a very large set of statistics spanning a great many data points. The question whether such a model constitutes a “work” is a novel one. The result may turn on whether the model falls outside copyright altogether, as a set of facts rather than expression, or whether it in fact constitutes a derivative work because it is derived from copyrighted works.

Guadamuz considers the issue.[21] He points out, as above, that there are two separate issues: the process of training the model and the output generated – which he describes as inputs and outputs. He notes that the explosion in the sophistication of these tools has come about mainly owing to the availability of large training datasets. The best source of data today is the internet, and material can be scraped from websites, a practice that raises questions of legality. OpenAI was said to have trained its original large language model GPT-3 using mostly online sources, the vast majority from web crawls, some coming from books and some from other curated sources such as Wikipedia.[22] The author also says that while “Open AI does not specify it, some of these sources were collected by Open AI itself, but in other instances they used datasets created by others, such as the case of 16% of the training data coming from independent sources, particularly the book data, named Books1 (12 billion tokens) and Books2 (55 billion tokens)”.[23] There have been reports that Large Language Models are now being trained, on an experimental basis, using so-called synthetic data – data generated by another Artificial Intelligence. A successful roll-out should reduce the volume of copyright-protected data consumed in training.[24]

As regards the outputs of generative AI the author explains that:

“The main idea behind creative AI is to train a system in a way that it can generate outputs that statistically resemble their training data. In other words, to generate poetry, you train the AI with poetry; if you want it to generate paintings, you train it with paintings.”[25]

This statistical element is very important, for the author notes:

“But the takeaway is that models do not contain copies of works. For legal purposes, they are not even derivatives of one specific work in the dataset; the idea of large datasets is precisely that a model is not based on one individual work. This is evidenced by the size of trained models. While some are relatively large, they run in gigabytes, not tera or petabytes, and cannot possibly contain all the works in the training data – they are mostly statistical data.”[26]
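A rough back-of-envelope calculation illustrates the size point. The figures in the short Python sketch below are assumptions chosen purely for illustration, not the specifications of any particular model or dataset discussed in this chapter.

# Illustrative arithmetic only; the figures are assumptions, not the
# specifications of any actual model or training corpus.
parameters = 7_000_000_000           # a mid-sized model of ~7 billion parameters
bytes_per_parameter = 2              # 16-bit weights
model_size_gb = parameters * bytes_per_parameter / 1e9

training_tokens = 1_000_000_000_000  # ~1 trillion tokens of training text
bytes_per_token = 4                  # a few characters of text per token
corpus_size_gb = training_tokens * bytes_per_token / 1e9

print(f"model weights: ~{model_size_gb:.0f} GB")   # ~14 GB
print(f"training text: ~{corpus_size_gb:.0f} GB")  # ~4000 GB, i.e. ~4 TB
print(f"ratio: roughly 1 : {corpus_size_gb / model_size_gb:.0f}")

On these assumed figures the weights are hundreds of times smaller than the text they were trained on, which is the thrust of Guadamuz’s observation; it does not, however, exclude the memorisation of particular passages, an issue returned to below.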

The author continues as regards inputs and copyright:

“The question arises as to whether the use of data in the training of such models[27] is infringing copyright. As described above, data collection almost certainly will require making a copy of the data, be it in the shape of text, images, music, paintings, portraits, etc. From a technical perspective, whichever method one is using to train and teach the AI to do something, this will require accessing and reading the data. This will be stored and then analysed, often repeatedly, to extract information, produce statistical analysis, and produce outputs, all depending on the model.”[28]

Consequently, the author accepts, “a copy of the data must be available in some form for preparation and data extraction. So, there could be infringement if this reproduction is unauthorised.”[29] In those cases of a prima facie infringement, the author looks to whether an exception may arise. He anticipates that a defendant in a future case may argue that any infringement was merely “transitory” as it was based on a temporary copy,[30] but he accepts that such a use may not be “incidental” and consequently would still violate copyright. The copy made would be considered part of a technological process, but it may not be considered a lawful use. Further, he states, one could argue that the resulting model does have economic significance.[31] He points to a challenge:

“In the case of text, the value is the analysis of billions of tokens and what matters is not which specific work is present, but that the number of works is large and varied. This highlights a fundamental challenge in copyright law concerning AI: discerning the individual value of works in a vast dataset versus the collective value extracted from the aggregation of these works.”[32]

As regards output, the author considers the question slightly easier. In circumstances where fair use is accepted at the input stage in the United States, the question will then turn to the output. He states that, for an infringement at the output stage, three requirements need to be met:

This question of output brings us back to The New York Times case, as there is another aspect to the output mentioned in that case. The issue is what happens when a user of the large language model specifically asks the model to display a given article from The New York Times. The result produced is described as memorisation,[36] and Kortz felt this might be covered by fair use. This is likely to be contested by The New York Times, however, as, in its submissions, it refers specifically to the usurpation of “specific commercial opportunities of the Times”.[37] OpenAI in its submissions in response denied this.[38]
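“Memorisation” here means the model reproducing passages of its training text verbatim when prompted. Purely as a hedged illustration of what is being alleged (and not a method attributed to either party or to any court), the short Python sketch below measures how much of a source passage reappears word for word in a model’s output by counting shared five-word sequences; the two sample strings are invented.

def ngrams(text, n=5):
    # Collect every run of n consecutive lower-cased words.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(source, output, n=5):
    # Fraction of the source's word n-grams that reappear verbatim in the output.
    src = ngrams(source, n)
    if not src:
        return 0.0
    return len(src & ngrams(output, n)) / len(src)

# Invented strings used purely for illustration.
source_article = "the quick brown fox jumps over the lazy dog near the river bank at dawn"
model_output = "according to reports the quick brown fox jumps over the lazy dog yesterday"

print(f"{verbatim_overlap(source_article, model_output):.0%} of the source's 5-grams recur")

A high overlap on a real article would be the kind of verbatim reproduction the complaint describes; a low overlap would be closer to the “statistical resemblance” on which model providers rely.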

The outcome of this litigation is potentially pivotal to how large language models develop in the marketplace, as a finding of a violation is likely to result in substantial damages and could affect how other copyright cases develop elsewhere in the courts. Ultimately, several findings against OpenAI – where, for instance, a court rules that there has been a copyright infringement in how the model was trained – could even affect how the model itself is trained in future, provided of course that any claimed exception is not accepted by the court in the United States.

The dispute pits two powerful entities in the world of information provision against each other. The New York Times, with 10.36 million subscribers at the end of 2023,[39] is one of the world’s best-performing publishing models in an era in which newspapers have come under pressure. OpenAI, however, is potentially larger: it is the “fastest growing consumer app ever”[40] and was described as having “1.7 billion visits” on its one-year anniversary in 2023.[41] How this dispute plays out could have tremendous repercussions for this nascent industry. Nor can we rule out an appeal by either party and, possibly, a Supreme Court ruling on the matter further down the line. The lawsuit by The New York Times was followed by an action against OpenAI by other newspapers, including the owners of the New York Daily News and the Chicago Tribune;[42] OpenAI filed a motion to consolidate these proceedings with The New York Times action in June 2024.[43] Internet giant Google has also been sued for alleged copyright infringement.[44]

It is worth mentioning too the introduction of the Generative AI Copyright Disclosure Act[45] in the US Congress earlier in 2024. This Bill, if enacted, would require AI companies to submit any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. The Bill requires such filings at least 30 days prior to release of the product. It was introduced by California Democratic congressman Adam Schiff and reportedly has the backing of numerous entertainment industry organisations and unions.[46]

Of course, these developments may or may not affect how the same issues develop in Ireland and, more widely, in the European Union. One author[47] considers that, if the human-author requirement persists in EU law, works created using AI may fall into the public domain. Interestingly, the article presciently anticipates future developments when it states:

“The concept of ‘AI-generated works’ is discussed in this piece on the basis that (at the time of writing), we do not yet have the technology whereby an AI can fully autonomously generate artistic or musical works with zero human involvement. At present, some type of human involvement is required in the AI creative process.”[48]

It is worth pointing out that the EU institutions also specifically refer to the issue of copyright in the recitals to the EU AI Act.[49] The recitals state clearly:

“General purpose models, in particular large generative models, capable of generating text, images, and other content, present unique innovation opportunities but also challenges to artists,[50] authors, and other creators and the way their creative content is created, distributed, used and consumed. The development and training of such models require access to vast amounts of text, images, videos, and other data. Text and data mining techniques may be used extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and related rights. Any use of copyright protected content requires the authorization of the rightholder concerned unless relevant copyright exceptions and limitations apply. Directive EU 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter, for the purposes of text and data mining, under certain conditions. Under these rules, rightholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research. Where the rights to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorisation from rightholder if they want to carry out text and data mining over such works.”[51]

Jani McCutcheon posits the view in her article “The Vanishing Author in Computer-Generated Works: A Critical Analysis of Recent Australian Case Law” that works which are substantially shaped by software, such that they lack a human author, should not be denied copyright protection solely because they were computer generated, if they are otherwise original.[52] In a follow-up[53] she put forward three possibilities for reform:

  1. Retain computer-generated works as ‘works’, and fictionalise an author[54];
  2. Classify computer-generated materials as subject matter other than works; or 
  3. Provide sui generis protection.

Universal Music, Concord Music Group and ABKCO have brought preliminary injunction proceedings against AI company Anthropic in Nashville, Tennessee, though the case was transferred to the Northern District of California in June 2024.[55] Anthropic runs an LLM called Claude. In their claim, Universal and the others allege that Anthropic scrapes their songs without permission and then uses them to generate “identical or nearly identical copies of those lyrics”. The claim states that no request for licensing was made by Anthropic and that copyrighted material is not free to be taken simply because it is easily accessible. Anthropic was founded in 2021 by a group of researchers who left rival OpenAI and has drawn investment from Amazon and Google.[56]

In a statement of opposition to the lawsuit,[57] Anthropic claimed that the “attack on this new category of digital tools misconceives the technology and the law alike.”[58] It asked the court to dismiss the motion against it on the basis that a preliminary injunction motion was the wrong forum for such proceedings.

“Existing song lyrics are not among the outputs that typical Anthropic users request from Claude. (…) There would be no reason to: song lyrics are available from a slew of freely accessible websites. (…) ¶ Normal people would not use one of the world’s most powerful and cutting-edge generative AI tools to show them what they could more reliably and quickly access using ubiquitous web browsers. Doing so would violate Anthropic’s Terms of Service, which prohibit the use of Claude to attempt to elicit content that would infringe third-party intellectual property rights.”[59]

 On the issue of training the LLM the court filing states:

“Anthropic does not seek out song lyrics in particular and does not deliberately assign any greater weight to them than to any other text collected from the web.  But like other generative AI platforms, Anthropic does use data broadly assembled from the publicly available Internet, including through datasets compiled by third party non-profits for the research community. In practice, there is no other way to amass a training corpus with the scale and diversity necessary to train a complex LLM with a broad understanding of human language and the world in general. Any inclusion of Plaintiffs’ song lyrics—or other content reflected in those datasets—would simply be a byproduct of the only viable approach to solving that technical challenge. All told, song lyrics constitute a minuscule fraction of Claude’s training data, and the 500 works-in-suit constitute a minuscule fraction of that minuscule fraction.”[60]

On the issue of output the filing states:

“Just because certain content was part of Claude’s training data set does not, however, mean that an end user can access it. Claude is, after all, a generative AI system. It is designed to generate novel content, not simply regurgitate verbatim the texts from which it learned language. While it does on occasion happen that the model’s output may reproduce certain content—particularly texts that escaped deduplication efforts when preparing the training set—as a general matter, outputting verbatim material portions of training data is an unintended occurrence with generative AI platforms, not a desired result.”[61]

The defendant argued that the use of the Plaintiffs’ lyrics constitutes a transformative use because the challenged use adds “a further purpose or different character” to the original works – thus pointing in favour of the application of the fair use doctrine.

“Using Plaintiffs’ copyrighted song lyrics as part of a multi-trillion token dataset to train a generative AI model about the world and how language works is the very definition of “transformative” under the fair use doctrine. The lyrics are literally transformed, in that they are broken down into small tokens used to derive statistical weights, rather than stored as intact copies.”[62]
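The “broken down into small tokens” language in the filing refers to the standard preprocessing step in which text is split into small units before training. The Python snippet below is a deliberately naive, hedged sketch of that idea only, splitting on letters and punctuation; production systems use learned sub-word vocabularies (such as byte-pair encoding), and the sample line is invented rather than an actual lyric.

import re
from collections import Counter

def naive_tokenise(text):
    # Crude stand-in for a real tokeniser: lower-cased words and punctuation marks.
    return re.findall(r"[a-z']+|[.,!?]", text.lower())

# Invented line of text used purely for illustration.
line = "this is not a real lyric, just an illustrative line of text"
tokens = naive_tokenise(line)

print(tokens)
# During training the model does not store the line as such; it adjusts
# statistical weights reflecting which tokens tend to follow which others.
print(Counter(tokens).most_common(3))

Whether turning works into such token statistics is “transformative” in the legal sense is, of course, precisely what the parties dispute.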

Further, the defendant argued that Anthropic includes the Plaintiffs’ works in the corpus to teach its AI models to recognise language patterns, not to appropriate the songs’ creative elements. It also argued that it had not taken any more of the copyrighted works than necessary, stating that this was within permissible limits where the allegedly infringing work serves a different purpose from the original. Finally, the defendant argued that Anthropic’s use of the Plaintiffs’ lyrics to train Claude does not harm any cognizable market.

“It makes no sense to suggest that someone who might have paid licensing fees for the kinds of uses Plaintiffs legitimately exploit—displaying their song lyrics on third-party websites or as part of karaoke videos—will decline to do so because Anthropic used the songs to train a generative AI model.”[63]

In a filing in reply in 2024,[64] Universal claimed that the defendant does not dispute that it copied published lyrics on a massive scale to train Claude and that it rests its opposition on three narratives, pointing out that Anthropic’s own training data specifically indicates that it expected its AI models to respond to requests for the lyrics:

“Anthropic downplays its wholesale theft of Publishers’ lyrics by claiming that its AI models are “not designed to output copyrighted material,” that “[n]ormal people” would not seek lyrics from its models, and that infringing output is a “‘bug,’ not a ‘feature.’(…) Those statements are categorically false. Anthropic’s own training data makes clear that it expected its AI models to respond to requests for Publishers’ lyrics. In fact, Anthropic trained its models on prompts such as “What are the lyrics to American Pie by Don McLean?” Given this, it is astonishing that Anthropic represents that its models were not intended to respond to such requests.”[65]

The Plaintiffs argued that the defendant’s exploitation of the publishers’ lyrics for AI training does not constitute fair use, claiming that there was no need to train the AI on the entire works:

“Anthropic does not need to copy Publishers’ artistic expression in its entirety to achieve its claimed purpose. Anthropic protests that Publishers’ lyrics are a tiny fraction of its training data; it could easily exclude those lyrics and retain the remaining “trillions of tokens of pre-existing text” it allegedly requires.”[66]

The Plaintiffs also addressed the defendant’s argument that its use was transformative, stating that the use was instead commercial: “Anthropic’s purpose was to build an AI model that could respond to lyrics requests, often with verbatim copies of Publishers’ lyrics or derivative works excluded from fair use’s ambit.”[67] The Plaintiffs denied that there was any transformative use in any event, stating that the secondary use was so similar to the typical use that a compelling justification would be needed, and none existed. Separately, another company, Sony Music, issued a warning to over 700 AI companies not to train models using its data without express permission.[68] There is no doubt that AI has been disruptive to the music industry, with reports even of a new (AI-generated) song by a country superstar who has not otherwise released a song since a stroke in 2013.[69] One source even proposes legislative intervention at the stage of AI-generated output by establishing an AI royalty fund.[70]

In June 2024 music labels including Universal Music, Sony and Warner filed suit against AI companies Udio and Suno, arguing breach of copyright in the defendants’ use of copyrighted sounds and songs to train their models. Both AI companies allow users to create songs almost instantly by submitting a text command. The plaintiffs argued that the songs produced by the AI models were only possible because the systems were trained on reams of intellectual property that the plaintiffs own. Both AI companies defended their conduct.[71] The action between Universal and Udio was settled in October 2025, resulting in a collaboration between the two firms.[1]


[1] https://www.reuters.com/business/media-telecom/universal-music-settles-copyright-dispute-with-ai-firm-udio-2025-10-30/

The issue of generative AI and copyright has also arisen in other ways. In Getty v Stability AI[72] the question for the UK court was whether training and/or development of a model would constitute an infringement under English law if that process did not take place within the jurisdiction – meaning that the place where the model was trained is relevant to whether a court will make a finding of a violation of copyright. A parallel Getty case has also been filed in Delaware.[73]

The UK judgment, insofar as issues had been determined prior to the matter going to trial, left open the possibility that classic copyright principles may be departed from, in that a secondary copyright infringement of importing, possessing or dealing with an infringing copy (a copy made available from the model’s training data) might be made out where the information was available via software through a website.[74] This type of infringement, normally reserved for tangible objects, would constitute a novel finding.[75]

The facts are that Getty Images claimed that Stability AI “scraped” millions of images from Getty’s website and used those images unlawfully as input to train and develop its deep-learning AI model, Stable Diffusion, and that the model’s output was itself infringing in that it reproduced a substantial number of Getty’s copyright works or bore Getty’s trade marks. Getty claimed copyright infringement, database right infringement, trade mark infringement and passing off. In November 2025 the case was decided in favour of Stability AI, with the court making no finding of copyright infringement. The significance of the case is limited in that, during the proceedings, Getty was forced to drop the main plank of its case owing to a failure to provide proofs that the alleged infringement occurred in the UK.[76] An article in the New Law Journal considered the matter “a damp squib”, as it set no valuable precedent of any kind.[1]


[1] Miller, Potential landmark case protecting human creativity fizzles out with no precedent set, 175 NLJ 8138, p4 (1).

Scannell says:

“AI authorship is an issue because there is no agreement on whether AI-generated works can be protected by copyright. While technology is not at a stage in which AIs can generate works autonomously,[77] it is at a stage where human involvement can be minimized. It is not clear as to how much we should consider AI as merely a tool used by humans, or whether the AI tail is wagging the dog of human creativity. A human author is a common requirement for copyright protection. Yet, it is not clear if EU Member States’ copyright law would consider a work to be generated by a human when certain AI technologies are used”.[78]

It was reported in The New York Times that Getty and others are now pursuing Artificial Intelligence systems that draw only on licensed content, thus sidestepping any copyright infringement issues.[79]

One source helpfully points out that differences exist between the relevant copyright provisions in the United Kingdom and the United States of America.[80] In one case the United States Copyright Office (USCO) denied an author copyright protection on the basis that images in the comic book “Zarya of the Dawn”[81] had been obtained via an AI platform which had generated the images following prompts by the user. Copyright in the work was refused on the basis that it lacked “human authorship”.

In another United States copyright decision, in a case involving the title “A Recent Entrance to Paradise”,[82] an application was made to register a two-dimensional artwork created by a computer algorithm running on a machine. The applicant’s request to have the work registered was refused on the basis that the Work “lacked the required human authorship necessary to sustain a claim in copyright” and because the applicant had “provided no evidence on sufficient creative input or intervention by a human author in the Work”. The Office also stated it would not:

“abandon its longstanding interpretation of the Copyright Act, Supreme Court, and lower court judicial precedent that a work meets the legal and formal requirements of copyright protection only if it is created by a human author.”

On review, the Board stated unequivocally that copyright law only protects “the fruits of intellectual labour” that “are founded in the creative powers of the [human] mind”.[83] As the applicant did not assert that the Work was created with any contribution from a human author, the remaining issue for the Board was whether the human authorship requirement is unconstitutional and unsupported by case law, as argued by the applicant.

The Board noted that courts interpreting the Copyright Act, including the Supreme Court, have uniformly limited copyright protection to the creations of human authors. The example was given of Burrow-Giles Lithographic Co v Sarony,[84] in which it was argued that a photograph depicting Oscar Wilde (entitled Oscar Wilde, No. 18) could not be protected by copyright, as the law only protected an “author or authors” and “a photograph is not a writing nor the production of an author.” The argument was rejected by the Supreme Court, which stated:

“We entertain no doubt that the Constitution is broad enough to cover an act authorizing copyright of photographs, so far as they are representatives of original intellectual conceptions of the author.”[85]

In its Opinion the Court refers to “the exclusive right of a man to the production of his own genius or intellect.”[86] In Mazer v Stein[87] the Supreme Court cited its own decision in Burrow-Giles and stated: 

“They must be original, that is, the author’s tangible expression of his ideas. Compare Burrow-Giles Lithographic Co. v. Sarony, 111 U. S. 53, 111 U. S. 59-60. Such expression, whether meticulously delineating the model or mental image or conveying the meaning by modernistic form or color, is copyrightable.”[88]

And in Goldstein v California,[89] a case involving a conviction for committing acts of “record piracy”, the applicants challenged the relevant California statute proscribing such practices as violative of copyright. This argument was rejected,[90] the Court stating, citing Burrow-Giles:

“While an ‘author’ may be viewed as an individual who writes an original composition, the term, in its constitutional sense, has been construed to mean an ‘originator,’ ‘he to whom anything owes its origin.’ Burrow-Giles Lithographic Co. v. Sarony.”

Having referred to the relevant case law, the Board noted it was “compelled to follow Supreme Court precedent, which makes human authorship an essential element of copyright protection.”[91]

Non-human creations have also been rejected by lower courts in the United States of America. In Urantia Found v Kristen Maaherra[92] the Ninth Circuit held that a book containing words “authored by non-human spiritual beings” could not sustain copyright unless there had been “human selection and arrangement of the revelations”. A monkey cannot author a photograph: see Naruto v Slater, discussed further below.[93] Copyright in a “living garden” was rejected because “authorship is an entirely human endeavour” and “a garden owes most of its form and appearance to natural forces.”[94] And in Satava v Lowry[95] depictions of jellyfish were not protected by copyright, as the material “first expressed by nature are the common heritage of humankind, and no artist may use copyright law to prevent others from depicting them.”

The Board acknowledged that it was not aware of any United States court decision that has considered whether Artificial Intelligence can be an author for copyright purposes; it found the applicant’s argument in this respect “unavailing”[96] and rejected it.

In another decision, in 2023, concerning a refusal to register the AI-generated two-dimensional artwork Théâtre D’opéra Spatial,[97] the Board likewise rejected registration on the basis that the work contained “more than a de minimis amount of content generated by Artificial intelligence.”[98] The image was the first AI-generated image to win the 2022 Colorado State Fair’s annual fine art competition. The Board was aware of the image and aware that it was AI-generated, though this had not been disclosed in the application made to it.[99] The applicant had used the AI application Midjourney, a text-to-image AI service, in the creation of the work. The applicant argued that he had inputted “numerous revisions and text prompts at least 624 times to arrive at the initial version of the image.” Rejecting the applicant’s contention, the Board found that “features generated by Midjourney and Gigapixel AI must be excluded as non-human authorship”.[100]

The applicant contended that this position was incorrect as  “the underlying AI-generated work merely constitutes raw material” which the applicant has “transformed through his artistic contributions.” Therefore, he contended, “regardless of whether the underlying AI-generated work is eligible for copyright registration, the entire Work in the form submitted to the copyright office should be accepted for registration.”[101] The Board rejected this argument:

“After carefully examining the Work and considering the arguments made in the First and Second Requests, the Board finds that the Work contains more than a de minimis amount of AI-generated content, which must be disclaimed in an application for registration. Because Mr. Allen has refused to disclaim the material produced by AI, the Work cannot be registered as submitted.”

The Board referred to its AI Registration Guidance[102] and found that “if all of a work’s ‘traditional elements of authorship’ were produced by a machine, the work lacks human authorship, and the Office will not register it. If, however, a work containing AI-generated material also contains sufficient human authorship to support a claim to copyright, then the Office will register the human’s contributions.”[103] In such cases, the applicant must disclose AI-generated content that is “more than de minimis.”[104]

This de minimis standard was re-emphasised by the United States Copyright Office at a virtual event in June 2023.[105] The standard had been taken from the Supreme Court decision in Feist v. Rural Telephone[106] and had been included in the USCO’s Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.[107] That guidance also set down the following:

“When an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology – not the human user.”[108]

The issue likewise arises in the context of video-gaming. Ying Ye, in one piece,[1] argues that AI-generated content used in video game creation should still satisfy the originality requirements for copyright in each of the jurisdictions considered:

“Within the specific context of video games, AIGC [AI Generated Content] does not exist in isolation but serves as an integral component of the overall gaming experience. Game developers provide a creative foundation and directional guidance for AIGC generation by designing core elements such as game rules, narrative structures and artistic styles. More importantly, the generation process remains human inputs at several stages: from the selection of training data and the adjustment of algorithmic parameters to the optimization of output results each step reflects human creative choices and intellectual contributions. Therefore, AIGC in video games meets the originality standards required for copyright protection in jurisdictions such as the USA, China, UK and EU.”[2]


[1] Ying Ye, The copyright protection of AI-generated content in video games, Journal of Intellectual Property Law & Practice, 2025, jpaf081, https://doi.org/10.1093/jiplp/jpaf081

[2] Ibid

In another case, concerning a refusal to register Suryast,[109] the Board likewise refused to register a two-dimensional image, which had been presented for registration as having two authors: the applicant, Mr Sahni, as the author of “photograph, 2-D artwork” and “RAGHAV Artificial Intelligence Painting App” (“RAGHAV”) as the author of “2-D artwork.”[110] Following a request for more information on the extent of the AI application’s input in the creation of the image – in order to assess the de minimis threshold – Mr Sahni submitted a 17-page document describing how RAGHAV’s technology functions and how he used the technology to create the Work.[111] He generated the Work by taking an original photograph that he authored, inputting that photograph into RAGHAV, then inputting a copy of Vincent van Gogh’s The Starry Night into RAGHAV as the “style” input to be applied to the photograph, and choosing “a variable value determining the amount of style transfer.”[112] The Board found that “the Work does not contain sufficient human authorship necessary to sustain a claim to copyright.”[113] The Board stated that “if all of a work’s ‘traditional elements of authorship’ are generated by AI, the work lacks human authorship, and the Office will not register it.”[114]

“After considering the information provided by Mr. Sahni regarding his creation of the Work, including his description of RAGHAV, the Board concludes that the Work is not the product of human authorship. Specifically, the Board finds that the expressive elements of pictorial authorship were not provided by Mr. Sahni. As Mr. Sahni admits, he provided three inputs to RAGHAV: a base image, a style image, and a “variable value determining the amount of style transfer.” Sahni AI Description at 11. Because Mr. Sahni only provided these three inputs to RAGHAV, the RAGHAV app, not Mr. Sahni, was responsible for determining how to interpolate the base and style images in accordance with the style transfer value.”[115]
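The Board’s point – that the user’s contribution reduced to a handful of discrete choices – can be made concrete with a short, purely hypothetical sketch in Python. A crude pixel blend stands in for the style-transfer step; it is not how RAGHAV (or any neural style-transfer system) actually works, and the file names and value used below are invented.

    from PIL import Image

    # Hypothetical illustration only: a simple pixel blend standing in for the
    # three inputs described in the Suryast decision (base photograph, style
    # image, and a single "variable value determining the amount of style transfer").
    base = Image.open("base_photo.jpg").convert("RGB")       # the user's own photograph
    style = Image.open("starry_night.jpg").convert("RGB")    # the chosen style reference
    style = style.resize(base.size)                          # blending requires matching sizes

    style_strength = 0.7   # the single numeric choice left to the user in this sketch

    output = Image.blend(base, style, alpha=style_strength)  # the blend is computed by the library
    output.save("output.jpg")

Even on this simplified picture, the user’s contribution reduces to selecting two images and one number; the Board’s reasoning was that the expressive choices in the resulting image were made by the application rather than by Mr Sahni.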

In point of fact, of these four decisions, Zarya of the Dawn has been described as a partial success.[116] This is because, while the original decision to issue a registration for the title was subsequently cancelled – owing to concerns over the “randomly generated noise that evolves into a final image,”[117] and owing to the fact that the role of the Artificial Intelligence tool Midjourney had not been sufficiently disclosed to the Office – the USCO did acknowledge that the text of the graphic novel “as well as the selection, coordination, and arrangement of the Work’s written and visual elements” are protectable under copyright law.[118]

“After the registration was approved, the Office became aware of public statements and online articles in which you discuss the creation of Zarya Of The Dawn. After reviewing these statements, the Office now understands that “Midjourney” is an artificial intelligence tool you used to create some or all of the material contained in the work. In those public statements, you claim that your reliance on this artificial intelligence tool was clearly disclosed in your application. However, the word “Midjourney” appears only once within eighteen (18) individual files of material submitted to the Office for registration. This cryptic inclusion of the name of the tool was by no means an obvious or clear indication that you may not have created some or all of the material included in this work—contrary to the information you provided in your application. Had you included such a clear statement in an appropriate space on the application, the Registration Specialist would have corresponded with you to determine if this work was created by a human author, and if so, to clarify the appropriate scope of your claim. The fact that the word “Midjourney” appears on the cover page of a Work does not constitute notice to the Office that an AI tool created some or all of the Work.”[119]

In the result the original certificate issued to the applicant was cancelled and a new one issued, along with an update to the public record, to “briefly explain that the cancelled registration was replaced with the new, more limited registration.”[120]

Under United Kingdom law, however, the situation is covered by Section 178[121] of the Copyright, Designs and Patents Act 1988 (CDPA), which states:

“computer-generated”, in relation to a work, means that the work is generated by computer in circumstances such that there is no human author of the work;

The authorship of such a work is addressed in Section 9(3):[122]

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”[123]

Consequently, we may expect different findings in the two Getty Images cases – the UK case[124] and the Delaware case.[125] This is because under United States law, as it presently stands, a computer cannot be an author and no copyright subsists in a computer-generated work absent human authorship, while under United Kingdom law the relevant provisions clearly contemplate a situation where the computer has taken instruction from a person – and copyright may vest in that person. One source believes that the author of “Zarya of the Dawn” could have availed of copyright protection in the UK.[126]

This scenario may be an example of where the UK approach is better suited to the Artificial Intelligence environment; it has been described as “at the forefront of innovation-promotion and protection of creative works, given that for over 30 years the Copyright, Designs and Patents Act 1988 has afforded computer generated works copyright protection.”[127] However, the authors continue, intriguingly, as follows:

“[O]n the 10 May 2023, the House of Commons Science, Innovation and Technology Select Committee held an evidence session in the UK parliament on the impact of AI in the creative industry. Expert witness evidence from the session suggested that the CDPA’s approach to “computer generated” works is no longer appropriate as AI is less of a tool that aids in the creation of works, but rather, is what creates the works. A witness specified that this is particularly apparent where minimal input from the user of the AI is required to generate the works…”[128]

In other words, even with forward-thinking provisions in the 1988 Act, the United Kingdom, like every other jurisdiction, will have to re-think its approach – owing to the capabilities of Artificial Intelligence. The Italian legislature, for instance, has introduced a Bill which would set a de minimis standard: “AI generated works can be protected only when some creative and relevant intervention by humans is demonstrable”.[129] It is worth noting that copyright holders are re-thinking their approach as well: Getty Images, for instance, has launched its own generative AI trained only on Getty’s own library.[130] Other publishing houses have gone a different route and entered into licensing arrangements with LLM providers – though the money on offer is described as “anaemic”.[131]

Still, it is worth taking another look at the McCutcheon proposals for reform. First, she considers whether we should retain computer-generated works as ‘works’, and fictionalise an author. This proposal would mean simply accepting that, for instance, statistical large language models produce ‘works’ – a finding still to be determined – and that, further, those works are attributable to a fictionalised person pursuant to the deeming provision in Section 9(3) of the CDPA given above, where such applies. We must bear in mind that McCutcheon’s position, written in 2013, predated by some 10 years the appearance on the market of large language models. Drawing on more than a trillion data points, these models produce answers from a repository surely hitherto unmatched, and her analysis will require reassessment.

The courts will have to decide two issues:

  1. whether what is produced can attract copyright at all – whether the output is merely a set of facts, for instance; and
  2. if the answer to 1 above is yes, whether the output constitutes a derivative work, even where it is itself a set of facts, given that it is derived from copyrighted material acquired as training data – material which is itself the subject of copyright claims[132] – an issue which may turn on the jurisdiction in which the model was trained.[133]

Aligned with this position is the fact that not every jurisdiction has a deeming provision in respect of computer-generated works: only those which followed the United Kingdom position in its Copyright, Designs and Patents Act 1988, section 9(3). Having reviewed that provision, McCutcheon concludes that its implementation constitutes a “workable solution” – albeit one generating some surmountable issues.[134]

The second McCutcheon proposal is to classify computer-generated materials as subject matter other than works. The author gives the example of authorless “subject matter” and puts forward the following proposed definition:

“’computer-generated’, in relation to computer-generated material, means that the material is generated by computer in circumstances such that there is no identifiable human author of the material.”[135]

A deeming provision, like that at Section 9(3) above, would be required:

“In the case of computer-generated material, the author shall be taken to be the person by whom the arrangements necessary for the creation of the material are undertaken.”

The word ‘author’ could be replaced with ‘maker’ to clarify that the relevant person is not an author capable of enjoying moral rights. Interestingly, the Copyright Law Review Committee of Australia (CLRC) considered the “investor or owner of the computer/computer program” to be the relevant person and deserving owner.[136] This did not sit well with the author, who considered, rightly, that “it is particularly uncertain whether a programmer should be classified as the maker”.[137]

Option 3 is conceptually more straightforward and may find a relatively receptive ear in the European Union. This option would be required where a court will not accept that anything other than human authorship can secure copyright protection – the current position in the United States of America and the European Union. Option 3 would protect computer-generated material as part of a sui generis legislative model – modelled on the comparable sui generis database rights pursuant to the Database Directive.[138] This option would not come without issues, though: actions such as that brought by The New York Times will first need to clarify whether the output of the model is itself in breach of copyright. One author, He, sounds a note of caution, advising the judiciary not to set the rules in this space and to wait instead for a legislature that has consulted with stakeholders:

“Perfectly regulating AI-generated content (AIGC) may be beyond the judiciary’s capacity, as the solutions are provided within an ill-suited framework. It would be preferable for legislators to engage in thorough discussions with stakeholders to develop a considered regulatory plan first, which does not necessarily have to revolve around copyright.”[139]

The author considers copyright laws in the United States and China and concludes that, despite the decision in Liu, there is still a common view in both jurisdictions that AI cannot hold copyright: only humans can. The uniqueness of the Liu decision lies in its recognition of copyright protection for pictorial content created on the basis of literal prompts fed into Stable Diffusion by a user, marking that user as the owner. The ruling, consequently:

“[S]parks new debates regarding the creative link between literal prompts and the resultant pictural content, as well as the theoretical justification and practical desirability of such a ruling.”[140]

The Beijing Internet Court in Liu considered that the plaintiff’s process, from the conceptualisation stage to the final selection of the image, involved significant intellectual input, encompassing designing the character’s presentation, selecting and arranging prompt words, setting parameters, and choosing the final image that met expectations. The image was thus described as mirroring “the plaintiff’s intellectual investment and qualifies as an ‘intellectual achievement’”.[141] The author refers to the United States decision in Zarya of the Dawn as an example that went the other way. In that case the AI tool Midjourney did not permit users sufficient control over generated images for them to be treated as the “master mind” behind them. Arguably, Théâtre D’opéra Spatial, another case concerning Midjourney, goes even further: the user there made 624 inputs, yet the outcome was still negative and authorship was found to be non-human.

The author notes that in Liu the Court recognised that the plaintiff did not personally draw the specific lines, nor did they instruct the model on how to draw those specific lines and colours. Consequently, he argues, the lines and colours were essentially drawn by the model – significantly different from the traditional method of using paintbrushes or graphic design software for drawing.[142] Still, despite this, the Court in Liu found that the picture was independently completed by the plaintiff and reflected personalised expression.[143] This conclusion followed from the plaintiff’s inputting of prompt words and setting of layout and composition through parameters indicative of the plaintiff’s choices and arrangements.[144] The plaintiff continued to add prompt words as part of a process of adjustment and correction reflecting the plaintiff’s aesthetic choices and personal judgments. This position – the value attached to prompts in the creative process – differs from that found in the United States Copyright Office’s Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, already mentioned.
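To illustrate the kind of user inputs the Court weighed – prompt words, negative prompts and parameter settings such as a seed and step count – the following is a minimal sketch using the open-source diffusers library with a Stable Diffusion checkpoint. The model identifier, prompts and parameter values are illustrative assumptions and are not taken from the judgment.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion checkpoint (model name is illustrative).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A fixed seed makes the output reproducible for a given set of inputs.
    generator = torch.Generator(device="cuda").manual_seed(42)

    image = pipe(
        prompt="portrait of a young woman in a garden, soft light",   # prompt words
        negative_prompt="blurry, extra limbs, watermark",             # negative prompts
        num_inference_steps=50,                                       # parameter settings
        guidance_scale=7.5,
        generator=generator,
    ).images[0]

    image.save("generated.png")

Each of these choices – the wording of the prompts, the negative prompts, the seed and the guidance settings – is the sort of input the Beijing Internet Court treated as evidencing the plaintiff’s selection and arrangement; the United States Copyright Office, by contrast, treats prompts alone as unlikely to suffice.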

The author also considers that the ratio of both judgments may rest on their respective views of the connection between the prompts and the generated content: where the user cannot demonstrate a sufficient creative connection between the prompts and the generated content, the user cannot claim copyright protection. Where a tool produces a result more out of luck than instruction, this will consequently be insufficient.

“Accordingly, if the prompts can only lead to one result in a certain AI platform, it will be deemed mechanical and receive no copyright protection; but if the prompts cannot exert sufficient control over the generated pictures, the user cannot claim copyright protection over the pictures due to the insufficient creative connection between the prompts and the generated content.”[145]

Interestingly, Professor Cui Guobin is cited by the author as suggesting that:

“[T]he users may not enjoy copyright protection over the AI-generated pictures rendered in the ‘first round’, but it is possible that they can enjoy copyright protection over the pictures due to their personal choices and adjustments, as demonstrated in the subsequent rounds of editing and fine-tuning.”[146]

Another issue mentioned is related to that of speed:

“The potential impact of generative AI on the creative market is more about speed than quality. It is true that technological advancements like AI have gradually freed us from mundane task. But whether we should protect them with copyright is another question. (…) [G]enerative AI is transforming the side of creative tools, reducing the intellectual and labour investment required for tasks like painting, writing and coding. This democratization of creative tools and knowledge, while stimulating overall creativity, also lowers the barrier to acquiring a broad range of skills that traditionally required extensive training.”[147]

The author notes this trend is not new – cameras also produce images instantly, for example. However, AIs, he states, “are more radical in their ability to produce outcomes of acceptable quality quickly for various creative industries.”[148]

Interestingly, on the point of human intervention, in 2020 – admittedly a while ago by the standards of today’s technology – the United States Patent and Trademark Office (USPTO) issued a paper on Public Views on Artificial Intelligence and Intellectual Property[149] in which it took account of public perception of AI. While many respondents felt we had already achieved, at that stage, so-called narrow AI – AI confined to specific tasks – there was a majority view that AGI, in contrast, was merely “a theoretical possibility”. Having recorded that finding, the Report went on to indicate that the majority of those surveyed considered that AI, in its current market form, can “neither invent, nor author without human intervention”.[150]

Finally, we should consider a case from China[151] on the issue, in which the Court in Beijing reached a conclusion different from the position in the United States of America in Zarya of the Dawn. The case was Li v Liu.[152] In Li the plaintiff used the open-source software Stable Diffusion to create an image through prompt-based input and subsequently shared the image on social media. The defendant used that image as an illustration for her poetry and posted it on her own social media account. The plaintiff brought proceedings alleging the defendant had removed the watermark and used the image without permission, thereby infringing his right of attribution. The case involved the question whether the generated image constitutes a work. Giving judgment, the Court stated:

“[T]he plaintiff claimed that the choosing and selecting of models, the entering of prompts and negative prompts, and the setting of parameters can all reflect the plaintiff’s selection, choice, arrangement, and design, condense the plaintiff’s intellectual work, and clearly have originality. In particular, when seen from an objectivist standard the image involved in this case clearly conforms to the characteristics of a work.”[153]

Considering the above, the court ruled that the content did constitute a work. One commentator in Beijing put the matter as follows:

“The judgment of the Beijing Internet Court held that the artificial intelligence generated pictures in the case reflects people’s original intellectual input, which should be recognized as works, protected by copyright law, and reflect the judicial innovation of the protection of AI painting works. The court recognizes that natural persons enjoy intellectual property rights for their use of AI painting large models to generate pictures under certain conditions, which is conducive to protecting and strengthening people’s dominant position in the development of artificial intelligence industry, encouraging people to use artificial intelligence software to create more high-quality works, and promoting the health of new technologies, new formats and new businesses.”[154]

The ruling in Li v Liu builds on a finding by a Chinese court in 2019 that AI-created works can be copyrighted under Chinese law. That decision was Tencent v Yingxun,[155] a case concerning AI writing-assistance software called Dreamwriter.[156] The Plaintiff used the software to collect and analyse the text structure of stock market financial articles. Based on the needs of different types of readers, the software formed the article structure according to the Plaintiff’s unique expression wishes. It then used the stock market data collected to complete the writing and published the article within two minutes of receiving the data (that is, two minutes after the close of the stock market).[157] The Court held that “Dreamwriter’s articles are still original works formed in human intellectual activities.”[158]
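By way of illustration only, the workflow described in the judgment – take market data at the close of trading, slot it into a pre-designed article structure, and publish within minutes – resembles a template-driven pipeline of the following kind. The sketch is hypothetical and is not based on Dreamwriter’s actual implementation; the function name and figures are invented.

    from datetime import date

    # Hypothetical template-driven report generator: market figures in, short
    # article out. Illustrative only; not Dreamwriter's actual implementation.
    def write_market_report(index_name: str, close: float, change_pct: float) -> str:
        direction = "rose" if change_pct >= 0 else "fell"
        return (
            f"{index_name} daily report, {date.today():%d %B %Y}: "
            f"the index {direction} {abs(change_pct):.2f}% to close at {close:,.2f}."
        )

    if __name__ == "__main__":
        # In practice the figures would arrive from a market feed at the close of
        # trading and the article would be published automatically moments later.
        print(write_market_report("Composite Index", 3245.67, -0.42))

The legal question in Tencent was whether output produced this quickly, and this mechanically, nonetheless reflected the human intellectual activity embodied in the template design and selection of data; the Court held that it did.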

One source sums up the situation:

“The unique characteristics of generative AI, including the self-improving nature of AI models and the difficulties associated with attributing their outputs to human creators, challenges the existing framework and necessitates a thorough rethinking of what rules will result in the greatest social value. Encouraging the creation and dissemination of such content is the main purpose of the copyright system, and allowing copyright protection for AI-generated works will achieve this purpose. Once the desirability of protecting these works is acknowledged, acknowledging AI authorship then becomes nothing more than opting for reality instead of legal fictions.”[159]

Another source agrees, saying there are challenges ahead for copyright law:

“Our basic copyright doctrines don’t fit generative AI.[160] We will struggle to apply the law to comport with the new realities. And it may suggest that copyright itself is a poor fit for the new world of AI-generated works.”[161]

This issue around authorship also brings to mind the interesting case of the so-called monkey selfie,[162] which was settled out of court but in which the issue arose whether a photograph taken by a monkey, using a camera which had been set up by a photographer, was covered by copyright, and, if so, whether that copyright was held by the photographer. On the path to resolution an argument that the monkey owned copyright in the photograph was dismissed on the basis that non-human entities cannot enjoy copyright.[163] Some held the view that the photograph falls into the public domain.[164] Others maintained that, as the photographer set up the conditions for the photograph to be taken, copyright would vest in the photographer.[165] This gives some idea of the range of issues we will enter into with AI when it comes to authorship – the arguments are likely to be broadly similar: whether a person who commands an AI to produce a work owns copyright in that work, whether the copyright is held by the entity that made the AI available to the market, or whether the AI can itself hold copyright in it. With the dawn of AGI it is more likely that the AI will hold copyright,[166] but, like a lot of things in this area, we will have to wait and see. In any event, I think we can all assume it would be otiose if the output of a large language model, by today’s standards, vested copyright in anyone other than the original copyright holder – where the model had drawn on the copyrighted works of another to generate its output.

In Shanghai Xinchuanghua Cultural Development Co., Ltd v AI company (alias),[167] a case concerning copyright law in China, the Plaintiff sought orders prohibiting the defendant from generating infringing Ultraman pictures and requiring it to remove such pictures from its training dataset. Ultraman has been certified by the Guinness Book of World Records as the TV programme with the most derivative series. The defendant was ordered by the court to immediately cease infringement of copyright, to prevent further generation of the pictures in violation of the Plaintiff’s copyright, and to pay CNY 10,000 in compensation. The request to remove the pictures from the training dataset was unsuccessful since “the defendant has not actually conducted model training.”[168]

These were not the only cases on Artificial Intelligence and copyright. In Thomson Reuters Enterprise Centre v ROSS Intelligence[169] the argument before the court in Delaware was whether there had been a copyright violation in circumstances where copyrighted headnotes from the Plaintiff’s legal research database were used as training data for an AI research tool. Some of the matters were disposed of summarily, including a finding that the defendant did engage in some copying of materials from the Plaintiff. The court noted that the key factors in dispute included fair use of the data.

“The court said that if, as ROSS contended, the AI tool only studied the language patterns in the headnotes to learn how to produce judicial opinion quotes, then it would be transformative intermediate copying (following certain intermediate copying cases cited by ROSS). But if, as Thomson Reuters alleges, ROSS used the untransformed text of headnotes to get its AI to “replicate and reduce the creative drafting done by Westlaw’s lawyer-editors”, then those intermediate copying cases would not apply.”[170]

A subsequent decision in Delaware re-emphasised the non-availability of a fair-use defence to ROSS and found in favour of Thomson Reuters, in a ruling[1] hailed as the first major AI copyright decision.[2]


[1]https://fingfx.thomsonreuters.com/gfx/legaldocs/xmvjbznbkvr/THOMSON%20REUTERS%20ROSS%20LAWSUIT%20fair%20use.pdf

[2] https://www.wired.com/story/thomson-reuters-ai-copyright-lawsuit/

In Kadrey v Meta Platforms (now consolidated with Chabon v Meta Platforms),[171] a class action filed in July 2023 concerning Meta’s LLaMA LLM, the plaintiffs allege the model was trained on their books. Meta was successful in having all claims dismissed except the claim alleging copyright infringement based on unauthorised copying of the plaintiffs’ books. The plaintiffs have since filed an amended complaint[172] and the matter proceeded to a three-hour hearing[2] on May 1, 2025. Judgment was issued on 25th June 2025 and resulted in a fair use defence being upheld – a ruling seen as a blow to authors.[1]


[1] https://www.ft.com/content/6f28e62a-d97d-49a6-ac3b-6b14d532876d


[1] https://www.perkinscoie.com/en/news-insights/recent-rulings-in-ai-copyright-lawsuits-shed-some-light-but-leave-many-questions.html#:~:text=ROSS%20Intelligence%20Inc.%2C%20which%20was,was%20developed%20by%20ROSS%20Intelligence.

[2] https://www.ecjlaw.com/ecj-blog/kadrey-v-meta-the-first-major-test-of-fair-use-in-the-age-of-generative-ai-by-jason-l-haas

In Andersen v Stability AI,[173] another class-action lawsuit, the claims related to three image-generation tools: Stable Diffusion, Midjourney and DreamUp. Those tools produce images in response to text inputs from users. The plaintiffs claimed that the models powering those tools were trained using copyrighted images, including images owned by the plaintiffs. A motion to dismiss was filed and a ruling issued.[174] The court dismissed most of the plaintiffs’ claims, with only one plaintiff’s direct copyright infringement claim surviving – against Stability AI – on the basis of searches conducted on haveibeentrained.com which showed, to the satisfaction of the court, that the images in question may well have been used as part of the training of the model.[175] An amended complaint has since been filed by the plaintiffs following leave of the court.[176]

The plaintiffs had further alleged that the defendants were vicariously liable for infringing derivative works created by third parties’ use of the defendants’ products.[177] This claim was dismissed by the court on the basis, inter alia, that the plaintiffs had failed to establish that the defendants had the ability to control the infringing actions of third-party users and that they benefited financially from those actions.[178]

Finally, before turning to consider the European context in this area it should be noted that the United States Copyright Office has issued a report on Copyright and Artificial Intelligence.[1] The report concludes that existing rules on copyright are adaptable enough to accommodate the issues presented by AI. It states:

Based on the fundamental principles of copyright, the current state of fast-evolving technology, and the information received in response to the NOI, the Copyright Office concludes that existing legal doctrines are adequate and appropriate to resolve questions of copyrightability. Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements. The Office continues to monitor technological and legal developments to evaluate any need for a different approach. [2]


[1] https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf

[2] Ibid. 

European Context

The European position has evolved somewhat since July 2025, when a code of practice[1] for large language models was published that included provisions for the prevention of the reproduction of content subject to copyright protection. As part of the code, companies must commit to putting in place technical measures that prevent their models from generating content that reproduces copyrighted material.[2]


[1] https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai

[2] https://www.ft.com/content/32a3c83d-64ed-4c83-a5d3-a6cd89b087ba

One source considers the position of generative AI and intellectual property rights, with a focus on Europe.[179] The authors note this area presents various legal challenges related to the “creative” outputs of models.[180] Like other sources, the authors distinguish between the training of models and their outputs. As regards outputs, they further differentiate between instances in which the models serve as mere instruments to enhance human creativity and situations in which the models operate with a significantly higher degree of autonomy.[181]

As regards training, the authors consider that either the rightholders must give their permission or the law must specifically allow the use of protected works in training.

“The extensive scale of the datasets used and, consequently, the significant number of rightholders potentially involved render it exceedingly difficult to envision the possibility that those training Large Language Models (LLMs) could seek (and obtain) an explicit license from all rightholders (…)”[182]

This issue becomes more evident with the practice of web-scraping techniques – a practice whose legality has continually been debated by courts and scholars in Europe,[183] even, note the authors, in terms of potential infringement of the sui generis database rights[184] already mentioned earlier. The authors mention the GPTBot,[185] which may allay some of those concerns.[186]

“A potential regulatory solution to ensure the lawful use of training datasets would involve applying the text and data mining (TDM) exception provided by Directive 2019/790/EU (DSMD) to the training of LLMs.”[187]

The authors note the cases brought before the courts in the United States in respect of potential copyright infringement related to materials used in the training phase, and consider that “the outcomes of such cases are not necessarily predictive of how analogous cases might be resolved in the EU” – for example, in the US the fair use doctrine, mentioned earlier, could be invoked.[188] In Europe, the EU Copyright Directive may also be relevant. Its recitals state:

“New technologies enable the automated computational analysis of information in digital form, such as text, sounds, images or data, generally known as text and data mining. Text and data mining makes the processing of large amounts of information with a view to gaining new knowledge and discovering new trends possible. Text and data mining technologies are prevalent across the digital economy; however, there is widespread acknowledgment that text and data mining can, in particular, benefit the research community and, in so doing, support innovation. Such technologies benefit universities and other research organisations, as well as cultural heritage institutions since they could also carry out research in the context of their main activities. However, in the Union such organisations and institutions are confronted with legal uncertainty as to the extent to which they can perform text and data mining of content. In certain instances, text and data mining can involve acts protected by copyright, by the sui generis database right or by both, in particular, the reproduction of works or other subject matter, the extraction of contents from a database or both which occur for example when the data are normalised in the process of text and data mining. Where no exception or limitation applies, an authorisation to undertake such acts is required from rightholders.” [189]

Article 3 and Article 4 are relevant. Article 3 of the Directive provides for an exception for reproductions and extractions made by research organisations and cultural heritage institutions in order to carry out, for the purposes of scientific research, text and data mining of works or other subject matter to which they have lawful access. Works garnered as a result of this process should be stored with an appropriate level of security and may be retained for the purposes of scientific research. Article 4 provides that Member States shall provide for an exception or limitation to the rights provided elsewhere in the Directive for reproductions and extractions of lawfully accessible works for the purposes of text and data mining; these can be retained for as long as is necessary for the purposes of text and data mining. These limitations apply provided the works in question have not been “expressly reserved by their rightholders in an appropriate manner”.[190] It should be noted that issues have arisen with respect to data scraping, as distinct from mining, where an established online etiquette ordinarily permits website operators to direct webcrawlers away from their site using the robots.txt protocol. Still, while the practice of many bots has been to respect that protocol and turn away without reading the relevant content, there have been reports of at least one Artificial Intelligence crawler behaving aggressively in a bid to locate data to train its model.[191]
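By way of illustration, robots.txt is simply a plain-text file published by a site operator listing which user agents may crawl which paths; a compliant crawler checks it before fetching anything, although compliance is voluntary. The short Python sketch below uses the standard library’s robotparser to show the idea – the directives and URLs are illustrative, though “GPTBot” is the user agent OpenAI documents for its crawler, mentioned above.

    from urllib import robotparser

    # Directives a site operator might publish to keep AI training crawlers out
    # while leaving the rest of the site open (illustrative example only).
    robots_txt = """
    User-agent: GPTBot
    Disallow: /

    User-agent: *
    Allow: /
    """.splitlines()

    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt)

    # A compliant crawler checks before fetching.
    print(parser.can_fetch("GPTBot", "https://example.com/articles/1"))        # False
    print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))  # True

The point relevant to the text above is that nothing in the protocol forces a crawler to honour the file; it is an etiquette, which is why reports of crawlers ignoring it have attracted attention.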

Article 17 of the Directive “constitutes a new liability regime that applies to service providers. It also incorporates a complicated exemption regime, which is likely to prove difficult to apply in practice” [192] and refers to three distinct sets of interest: those of the rightsholders, the service providers, and the individual internet user.[193]

Article 17 (4) states:

If no authorisation is granted, online content-sharing service providers shall be liable for unauthorised acts of communication to the public, including making available to the public, of copyright-protected works and other subject matter, unless the service providers demonstrate that they have:

Article 17(5) provides that the principle of proportionality applies to the application of paragraph 4 above, having regard to the type, the audience and the size of the service and the type of works uploaded. The availability of suitable and effective means, and their cost for service providers, is also addressed in that paragraph. Ireland implemented the Directive by amending its Copyright and Related Rights Act 2000.[194] The Irish Regulations implement Article 17 without modification.[195]

As regards the “complex legal issue” of outputs, the authors posit that “the use of protected materials in the training of an LLM does not imply, per se, that the LLM-generated outputs infringe upon the intellectual property rights in these materials or qualify as derivative creations thereof.”[196] They state, for example, that the fact that a text generated by an LLM shares the same style as the works of a specific author would not per se imply an infringement of the intellectual property rights of that author,[197] as, in most European legal systems, “the literary or artistic style of an author is not an aspect upon which an exclusive right can be claimed.”[198] They also raise the interesting claim that:

“If, by contrast, an infringement is found in an LLM output, the person prompting the LLM would first and foremost be liable because she directly brings the reproduction into existence. However, LLM developers might, ultimately, also be liable.”[199]

As regards the requirement for human authorship, the authors note that while international treaties and EU law “do not explicitly state that the author or inventor must be human”, there have been “various normative hints” that appear to support this conclusion.[200] In the context of copyright, for example, the work must “constitute an author’s intellectual creation.”[201] Fritz, in an article, posits that: “[T]he initial AI-generated work cannot be considered a work of human authorship; however, the edited work may be.”[1]


[1] Johannes Fritz, Understanding authorship in Artificial Intelligence-assisted works, Journal of Intellectual Property Law & Practice, 2025, jpae119, https://doi.org/10.1093/jiplp/jpae119

In Painer[202] the factual matrix concerned the abduction of a 10-year-old in 1998 and her escape in 2006. The dispute centred on the use by certain newspapers of portrait photographs of the abductee taken by a freelance portrait photographer, Ms Painer, prior to the child’s disappearance. When the story of her escape broke, the newspapers lacked an up-to-date photograph of the abductee, so they republished the old portrait and generated a “photo-fit” of what she might look like in 2006. Ms Painer objected to the republication of her pictures and to the publication of the photo-fit images – arguing that they were adaptations of her work.

Giving its judgment the Court of Justice of the European Union disagreed and ruled that:

In the case of portrait photographs like the contested photographs, the creator enjoys only a small degree of individual formative freedom. For that reason, the copyright protection of that photograph is accordingly narrow. Furthermore, the contested photo-fit based on the template is a new and autonomous work which is protected by copyright.[203]

The authors cited earlier return to the initial question posed and state:

“[W]hether an LLM-generated output may be eligible for protection under intellectual property law. The answer to this question is relatively straightforward when the LLM constitutes a mere instrument in the hands of a human creator, or, to put it differently, when the creative outcome is the result of predominantly human intellectual activity, albeit assisted or enhanced by an AI system. In such a scenario, the European Parliament has stressed that where an AI is used only as a tool to assist an author in the process of creation, the current IP framework remains fully applicable.[204] Indeed, as far as copyright protection is concerned, the Court of Justice of the EU has made clear in the Painer case[205] that it is certainly possible to create copyright-protected works with the aid of a machine or device.”[206]

This is not dispositive, however, for there are cases where an LLM operates in a substantially autonomous manner. The authors consider that the “mere formulation of a prompt by a human being is likely insufficient to recognise a substantial human contribution to the creative output generated by the LLM.”[207]

“The fundamental legal aspect is that a notable human contribution must be discernible not in the broader creative process, but specifically in the resulting creative outcome.”[208]

This conclusion, say the authors, points against copyright protection for content generated by LLMs in a substantially autonomous manner – consistent with the position of the US Copyright Office mentioned earlier, and the European Patent Office.[209]

Copyright in the EU AI Act

Two provisions in the new EU AI Act, both introduced late in the negotiations, refer to issues of copyright: Art. 53(1)(c) and Art. 53(1)(d). Providers of general-purpose AI models shall (1) put in place a policy to comply with Union copyright law, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Art. 4(3) of the Copyright Directive,[210] and (2) draw up and make publicly available a sufficiently detailed summary of the content used for training the general-purpose AI model.[211] The Financial Times has reported that the UK Government is set to produce rules on transparency in the training of AI models: https://www.ft.com/content/17f4c7ee-b1bc-4bde-8e92-bebb555479a2

Article 4 of the Copyright Directive states:

Article 4

Exception or limitation for text and data mining

1.   Member States shall provide for an exception or limitation to the rights provided for in Article 5(a) and Article 7(1) of Directive 96/9/EC, Article 2 of Directive 2001/29/EC, Article 4(1)(a) and (b) of Directive 2009/24/EC and Article 15(1) of this Directive for reproductions and extractions of lawfully accessible works and other subject matter for the purposes of text and data mining.

2.   Reproductions and extractions made pursuant to paragraph 1 may be retained for as long as is necessary for the purposes of text and data mining.

3.   The exception or limitation provided for in paragraph 1 shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.

4.   This Article shall not affect the application of Article 3 of this Directive.

“[T]he turbulent legislative process and its conclusion in a three-day marathon meeting in December 2023 resulted in an innovative amalgam of copyright and the meta-regulation of AI. (…) the last-minute introduction of two specific copyright-related obligations, leads to interpretation challenges and loopholes that will haunt the AI Act for years to come. While it is certainly too early to call copyright regulation via the AI Act a failure, the immediate beneficiaries of the statute will not be authors and other copyright holders, but lawyers, on whom the AI Act bestows a myriad of intricate questions. Academics will find this great fun, and attorneys will find it a great source of income. But a law that mainly makes lawyers happy is not a good law.”[212]

Publicity Rights 

Finally, while not strictly concerning the law of copyright per se, developments in the market (principally in China) concerning so-called Artificial Intelligence “grief-tech” touch upon post mortem publicity rights. Briefly put, the technology (available since about 2022) permits a deceased person to be digitally resurrected, such that loved ones can (for a fee) interact with a digital replica of the deceased which has been trained on the deceased’s data to resemble that person’s mannerisms. The amount paid for the service depends on the extent of the likeness the family member is pursuing. One article[213] looks at this from the point of view of infringement of the deceased’s portrait right and ultimately concludes that, as the service is limited to loved ones and as those persons grant permission for use of the likeness of the deceased, no legal issue consequently arises. The matter, however, could be different where a company was pursuing financial gain in the open market – perhaps in respect of a famous individual – without permission of that individual’s estate. While not mentioned in the article, the case of Albert Einstein is also instructive. Einstein died in 1955 and, pursuant to an article of his last will and testament, he pledged his manuscripts, copyrights, publication rights and royalties to vest, ultimately, in the Hebrew University of Jerusalem – an institution he co-founded in 1918. While the famous scientist made no mention of the use of his name and likeness in books, products or advertisements – what are known today as publicity rights – the Hebrew University sought to assert control over such rights when it took control of Einstein’s estate in 1982.[214] It was a feat described in one source as an example of “the new grave robbers”:[215]

“Throughout much of the world, the right of publicity ends at death, after which a person’s identity becomes generally available for public use. In the United States, however, this issue is governed by state laws, which have taken a remarkably varied approach. In New York, the right of publicity terminates at death; other states provide that the right of publicity survives death for limited terms. But in Tennessee (whose laws govern the use of Elvis Presley’s image, since he died there), Washington (home of a company that purports to own Jimi Hendrix’s right of publicity) and Indiana (where CMG Worldwide, which manages the identities of hundreds of dead people, is based), control over the identities of the dead has been secured for terms ranging from 100 years to, potentially, eternity.”

Various legislative attempts to address issues raised under this heading have also been introduced. The most notable is the United States Congress’s Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2023, which, among other provisions, seeks to create a property right to authorize the use of an image or likeness in a digital replica, applicable and inheritable for 70 years after the death of the individual, and would create a cause of action for unauthorized production or dissemination of digital replicas. Remedies would include the greater of $5,000 per violation or actual damages.[216]

California AB 1836, passed on August 31, 2024, prohibits the use of a digital replica of a deceased personality’s voice or likeness without prior consent from the deceased personality’s representatives. As under the 2023 NO FAKES bill, the right applies for 70 years after the death of the deceased personality.[217]

This complicated area is one which, one commentator suggests, requires monitoring and legal regulation:

“Just as ‘grief tech’ raises a lot of important ethical issues, the promise of the law is to deal with the ethical issues with a degree of certainty and most importantly to protect the rights and interests of both the living and the deceased. Matters such as the deceased’s consent to the use of their information in grief tech such as AI ‘resurrection’ require close attention either within the existing legal frameworks or otherwise. With the advent of generative AI, we now possess the ability to transform our deceased loved ones into digital entities. Whether such services are monetised or not, their primary and exclusive aim should be to support individuals in coping with the loss of loved ones and the law is to ensure that this intent is not to be compromised.”[218]

Conclusion

Aside from the issue of public safety, copyright presents probably the next most significant issue to arise from the roll-out of Artificial Intelligence models. The internet is awash with commentary on whether LLMs are infringing copyright and whether, on one view, this may even bring about the abeyance of the technology.[219] It is difficult to envision an adverse ruling by a court deterring these models from continuing their unrelenting advance, but, just in case, the deployers are already moving into the terrain of synthetic data – where the model learns from other models rather than from copyrighted materials.[220] Mantegna, in an article,[221] questions whether there is a risk of “overreaching copyright laws” in a bid to accommodate this new technology. She explains:

“Originally designed to incentivize creativity, copyright doctrine has been expanded in scope to cover new technological mediums. This expansion has proven to increase the complexity and uncertainty of copyright doctrine’s application—ironically leading to the stifling of innovation.”[222]

She continues:

“Further, answering the above questions in terms of AI policy requires understanding AI in the context of ethics, economics, and culture, as well as AI’s deployment in a digital society. As a technology, AI’s implementation triggers different legal fields related to innovation, such as data protection, consumer protection, and antitrust. Therefore, a holistic policy solution to the GAI problem cannot be articulated just by thinking from the copyright corner.”[223]

The Financial Times deal on copyright has been quickly followed by similar deals involving outlets including Time, Wordpress and Der Spiegel.[224] Obviously, as this chapter has noted, there are jurisdictional differences around copyright too, and exceptions to copyright infringement. The Getty case, for instance, was advanced in two different jurisdictions. First-instance outcomes in cases as significant as Getty, or The New York Times, are likely to be appealed by one side or the other anyway, and, consequently, we may be several years away from resolution.


[1] One source considers issues of liability more generally, in particular the practice of red-teaming models, or interventions to prevent an LLM from hallucinating, where this might mitigate problems arising from problematic speech such as falsely accusing people of serious misconduct. The authors ask whether such red-teaming behaviours actually present liability risk for model creators and deployers. Henderson, Hashimoto, Lemley, Where’s the Liability in Harmful AI Speech? 3 J. Free Speech L. 589 (2023)


[2] https://www.economist.com/business/2024/04/14/generative-ai-is-a-marvel-is-it-also-built-on-theft

[3] The Economist, March 23rd to March 29th 2024 at p.72

[4] https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf

[5] [2023] EWHC 3090 (Ch)

[6]https://fingfx.thomsonreuters.com/gfx/legaldocs/zgpokbjynpd/UNIVERSAL%20MUSIC%20ANTHROPIC%20LAWSUIT%20response.pdf

[7] https://www.nytimes.com/2024/06/25/arts/music/record-labels-ai-lawsuit-sony-universal-warner.html?searchResultPosition=6

[8] See generally: Gil Appel, Juliana Neelbauer, and David A. Schweidel, Generative AI Has an Intellectual Property Problem, Harv. Bus. R April, 2024 available at https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem

[9] See the excellent compilation by Baker Law including status updates here: https://www.bakerlaw.com/services/artificial-intelligence-ai/case-tracker-artificial-intelligence-copyrights-and-class-actions/

[10]  https://www.reuters.com/legal/litigation/authors-suing-openai-ask-california-court-block-competing-ny-cases-2024-02-09/#:~:text=Feb%209%20(Reuters)%20%2D%20A,and%20others%20in%20New%20York. Status updates are available here: https://www.bakerlaw.com/openai-chatgpt-litigation/

[11] https://www.reuters.com/legal/john-grisham-other-top-us-authors-sue-openai-over-copyrights-2023-09-20/

[12]https://www.bloomberglaw.com/public/desktop/document/NazemianetalvNVIDIACorporationDocketNo324cv01454NDCalMar082024Cou?doc_id=X4QHPD7KJBR8I8PK6AFRD0EODJ0

[13]https://www.bloomberglaw.com/public/desktop/document/ONanetalvDatabricksIncetalDocketNo324cv01451NDCalMar082024CourtDo?doc_id=X78MN4GVSRC9F4OTESFJ7KII697

[14] https://news.bloomberglaw.com/ip-law/nvidia-databricks-sued-in-latest-ai-copyright-class-actions

[15] https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf See here for status updates: https://www.bakerlaw.com/new-york-times-v-microsoft/

[16] https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf

[17]https://fingfx.thomsonreuters.com/gfx/legaldocs/byvrkxbmgpe/OPENAI%20MICROSOFT%20NEW%20YORK%20TIMES%20mtd.pdf

[18] https://hls.harvard.edu/today/does-chatgpt-violate-new-york-times-copyrights/

[19] This reference to fair use may well rest on an action concerning Google Books in 2005 where Google claimed fair use in its plan to digitise out-of-print books. See Andres Guadamuz, ‘Google and Book Publishers Settle’ (WIPO Magazine, July 2009) <https://www.wipo.int/wipo_magazine/en/2009/04/article_0004.html>

[20] Ibid.

[21] Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127, https://doi.org/10.1093/grurint/ikad140

[22] Citing Tom B Brown and others, ‘Language Models Are Few-Shot Learners’ (arXiv, 22 July 2020) <http://arxiv.org/abs/2005.14165>

[23] Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127 at 112. see https://academic.oup.com/grurint/article/73/2/111/7529098?searchresult=1

citing Tom B Brown and others, ‘Language Models Are Few-Shot Learners’ (arXiv, 22 July 2020) <http://arxiv.org/abs/2005.14165>

[24] https://www.nytimes.com/2024/04/06/technology/ai-data-tech-companies.html?searchResultPosition=1

[25] Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127 at 113. see https://academic.oup.com/grurint/article/73/2/111/7529098?searchresult=1

[26] Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127 at 115. see https://academic.oup.com/grurint/article/73/2/111/7529098?searchresult=1. Emphasis added.

[27] The Economist explains that “AIs are trained on vast quantities of human-made work, from novels to photos and songs. These training data are broken down into “tokens”—numerical representations of bits of text, image or sound—and the model learns by trial and error how tokens are normally combined. Following a prompt from a user, a trained model can then make creations of its own. More and better training data means better outputs.” https://www.economist.com/business/2024/04/14/generative-ai-is-a-marvel-is-it-also-built-on-theft

[28] Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127 at 115. see https://academic.oup.com/grurint/article/73/2/111/7529098?searchresult=1.

[29] Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127 at 117. see https://academic.oup.com/grurint/article/73/2/111/7529098?searchresult=1

[30] Ibid.

[31] Ibid.

[32] Ibid.

[33] This is described by the author as potentially “difficult to prove, as we may have to look at the technology in detail to see if an output constitutes a copy” Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127 at 121, see https://academic.oup.com/grurint/article/73/2/111/7529098?searchresult=1

[34] “There must be a causal connection between the original work and the alleged infringing copy. This is to avoid cases of independent creation where one work resembles another by coincidence, or because both authors were inspired by similar works.” Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127 at 123, citing Francis Day & Hunter v Bron [1963] Ch 587. see https://academic.oup.com/grurint/article/73/2/111/7529098?searchresult=1

[35] The author states: “Exact replication is likely to be rare given the vast amounts of training datapoints mentioned above, and the relatively small number of verbatim copies found in the literature, even when setting out to try to obtain a replica. So, most of the potentially infringing outputs would be partial or inexact copies, and the legal question becomes one of similarity between the input and the output.” Andres Guadamuz, A Scanner Darkly: Copyright Liability and Exceptions in Artificial Intelligence Inputs and Outputs, GRUR International, Volume 73, Issue 2, February 2024, Pages 111–127 at 124. see https://academic.oup.com/grurint/article/73/2/111/7529098?searchresult=1

[36] https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf at para 80.

[37] It states:

“193. The Times gathers information, which often takes the form of time-sensitive breaking news, for its content at a substantial cost to The Times. Wirecutter likewise compiles and produces time-sensitive recommendations for readers. 

194. By offering content that is created by GenAI but is the same or similar to content published by The Times, Defendants’ GPT models directly compete with Times content. Defendants’ use of Times content encoded within models and live Times content processed by models produces outputs that usurp specific commercial opportunities of The Times, such as the revenue generated by Wirecutter recommendations. For example, Defendants have not only copied Times content, but also altered the content by removing links to the products, thereby depriving The Times of the opportunity to receive referral revenue and appropriating that opportunity for Defendants. 

195. Defendants’ use of Times content to train models that produce informative text of the same general type and kind that The Times produces competes with Times content for traffic. 

196. Defendants’ use of Times content without The Times’s consent to train Defendants’ GenAI models constitutes free-riding on The Times’s significant efforts and investment of human capital to gather this information. 

197. Defendants’ misuse and misappropriation of Times content has caused The Times to suffer actual damages from the deprivation of the benefits of its work, such as, without limitation, lost advertising and affiliate referral revenue.”   

The Times also claimed for trade mark dilution:

200. The Times’s trademarks are distinctive and famous.

201. Defendants have, in connection with the commerce of producing GenAI to users for profit throughout the United States, including in New York, engaged in the unauthorized use of The Times’s trademarks in outputs generated by Defendants’ GPT-based products.

202. Defendants’ unauthorized use of The Times’s marks on lower quality and inaccurate writing dilutes the quality of The Times’s trademarks by tarnishment in violation of 15 U.S.C § 1125(c).

203. Defendants are aware that their GPT-based products produce inaccurate content that is falsely attributed to The Times and yet continue to profit commercially from creating and attributing inaccurate content to The Times. As such, Defendants have intentionally violated 15 U.S.C § 1125(c).

As an actual and proximate result of the unauthorized use of The Times’s trademarks, The Times has suffered and continues to suffer harm by, among other things, damaging its reputation for accuracy, originality, and quality, which has and will continue to cause it economic loss. (https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf)

[38] In reply Open AI stated as follows:

“The Complaint includes two examples of ChatGPT allegedly regurgitating training data consisting of Times articles. Compl. ¶¶ 104–07. In both, the Times asked ChatGPT questions about popular Times articles, including by requesting quotes.  (…)  Each time, ChatGPT provided scattered and out-of-order quotes from the articles in question. 

In its Complaint, the Times reordered those outputs (and used ellipses to obscure their original location) to create the false impression that ChatGPT regurgitated sequential and uninterrupted snippets of the articles. (…) In any case, the regurgitated text represents only a fraction of the articles, (…) (105 words from 16,000+ word article), all of which the public can already access for free on third-party websites.” (https://fingfx.thomsonreuters.com/gfx/legaldocs/byvrkxbmgpe/OPENAI%20MICROSOFT%20NEW%20YORK%20TIMES%20mtd.pdf) It should be noted that in February 2024 OpenAI asked a federal judge to dismiss parts of The New York Times’ lawsuit against it, arguing that the newspaper “hacked” its chatbot ChatGPT and other artificial-intelligence systems to generate misleading evidence for the case. https://www.reuters.com/technology/cybersecurity/openai-says-new-york-times-hacked-chatgpt-build-copyright-lawsuit-2024-02-27/

[39] https://www.nytimes.com/2024/02/07/business/media/new-york-times-q4-earnings.html#:~:text=At%20the%20end%20of%20the,million%20of%20them%20digital%2Donly.&text=The%20New%20York%20Times%20Company,billion%20for%20the%20first%20time.

[40] https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app

[41] https://www.cnbc.com/2023/11/30/chatgpts-one-year-anniversary-how-the-viral-ai-chatbot-has-changed.html#:~:text=It%20didn’t%20take%20long,users%2C%20per%20a%20UBS%20study.

[42] https://news.bloomberglaw.com/ip-law/openai-microsoft-sued-by-newspapers-over-copyrighted-inputs

[43] https://www.bakerlaw.com/daily-news-v-microsoft/

[44] https://news.bloomberglaw.com/ip-law/google-hit-with-copyright-class-action-over-imagen-ai-model

[45] https://www.congress.gov/bill/118th-congress/house-bill/7913

[46] https://www.theguardian.com/technology/2024/apr/09/artificial-intelligence-bill-copyright-art

[47] Barry Scannell, When Irish AIs are smiling: could Ireland’s legislative approach be a model for resolving AI authorship for EU member states?, Journal of Intellectual Property Law & Practice, Volume 17, Issue 9, September 2022, Pages 727–740, https://doi.org/10.1093/jiplp/jpac068

[48] Ibid at 728.

[49] See chapter 4.

[50] 11,000 artists signed a letter warning against the threat of artificial intelligence to the creative industries in October 2024 (https://www.ft.com/content/c7c0e8bf-9cdd-4a42-8e01-d2a36ba06298)

[51] Recital 60i, emphasis added.

[52] J McCutcheon, ‘The vanishing author in computer-generated works: a critical analysis of recent Australian Case Law’ (2013) 36 Melbourne University Law Review 915–969, 929.

[53] J McCutcheon, ‘Curing the Authorless Void: Protecting Computer-Generated Works following IceTV and Phone Directories’ (2013) 37(1) Melbourne University Law Review 46. 

[54] See also Perritt, Henry H. Jr. “Copyright for Robots?.” Indiana Law Review, vol. 57, no. 1, 2023, pp. 139-198. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/indilr57&i=186.

[55] See https://www.bakerlaw.com/concord-music-group-inc-v-anthropic-pbc/ See also separate copyright infringement proceedings (which have been settled) against the same defendant from August 2024: https://www.ft.com/content/e6a4dcae-2bda-42de-8112-768844673cea and https://www.reuters.com/sustainability/boards-policy-regulation/anthropic-settles-class-action-us-authors-alleging-copyright-infringement-2025-08-26/ In June 2025 the Court ruled that training of the AI using the copyrighted material constituted “fair use” but that storage of 7 million books violated copyright. https://www.theguardian.com/technology/2025/jun/25/anthropic-did-not-breach-copyright-when-training-ai-on-books-without-permission-court-rules The storage of copyrighted files in this way has been described as “Shadow Libraries” in one source: see Michelle Rademeyer, Niloufer Selvadurai, Out from the shadows: developing effective copyright laws for AI training datasets and shadow libraries, Journal of Intellectual Property Law & Practice, 2025, jpaf072, https://doi.org/10.1093/jiplp/jpaf072

[56] https://www.ft.com/content/0965d962-5c54-4fdc-aef8-18e4ef3b9df5

[57] https://fingfx.thomsonreuters.com/gfx/legaldocs/zgpokbjynpd/UNIVERSAL%20MUSIC%20ANTHROPIC%20LAWSUIT%20response.pdf

[58] Ibid. 

[59] Ibid.

[60] Ibid.

[61] Ibid.

[62] Ibid.

[63] Ibid.

[64] https://fingfx.thomsonreuters.com/gfx/legaldocs/znvnklkgwpl/UNIVERSAL%20MUSIC%20ANTHROPIC%20LAWSUIT%20reply.pdf

[65] Ibid.

[66] Ibid.

[67] Ibid.

[68] https://www.bloomberg.com/news/articles/2024-05-16/sony-music-warns-companies-to-stop-training-ai-on-its-artists-content?embedded-checkout=true

[69] https://www.nytimes.com/2024/05/30/arts/music/popcast-artificial-intelligence-ai.html?searchResultPosition=2

[70] Sabine Jacques, Mathew Flynn, Protecting Human Creativity in AI-Generated Music with the Introduction of an AI-Royalty Fund, GRUR International, 2024, ikae134, at p. 13, https://doi.org/10.1093/grurint/ikae134

[71] https://www.nytimes.com/2024/06/25/arts/music/record-labels-ai-lawsuit-sony-universal-warner.html?searchResultPosition=6

[72] [2023] EWHC 3090 (Ch)

[73] https://www.courtlistener.com/docket/66788385/getty-images-us-inc-v-stability-ai-inc/ See here for updates: https://www.bakerlaw.com/getty-images-v-stability-ai/

[74] Contrary to section 22 or 23 of the Copyright, Designs and Patents Act 1988 (CDPA)

[75] The argument turns on whether sections 22 and 23 of the CDPA are limited to dealings in articles which are tangible things or whether they may also encompass dealings in intangible things, such as, in this instance, making available software on a website.

[76]  https://www.thetimes.com/uk/technology-uk/article/getty-images-ai-lawsuit-9x6pvzncc

[77] Technology has arguably subsequently achieved this capability. See below. 

[78] Barry Scannell, When Irish AIs are smiling: could Ireland’s legislative approach be a model for resolving AI authorship for EU member states?, Journal of Intellectual Property Law & Practice, Volume 17, Issue 9, September 2022, Pages 727–740, https://doi.org/10.1093/jiplp/jpac068 at 729.

[79] https://www.nytimes.com/2024/07/19/technology/generative-ai-getty-shutterstock.html?searchResultPosition=7

[80] https://www.iptechblog.com/2023/07/copyright-protection-for-ai-works-uk-vs-us/

[81] Re ‘Zarya of the Dawn’ (Registration VAu001480196). See https://www.copyright.gov/docs/zarya-of-the-dawn.pdf and https://www.iptechblog.com/2023/07/copyright-protection-for-ai-works-uk-vs-us/

[82] https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf

[83] Citing the Trade-Mark Cases, 100 U.S. 82, 94 (1879)

[84] 111 U.S. 53 (1884) See https://supreme.justia.com/cases/federal/us/111/53/

[85] 111 U.S. 53, 58 (1884)

[86] 111 U.S. 53, 60-61 (1884)

[87] 347 U.S. 201 (1954) See https://supreme.justia.com/cases/federal/us/347/201/

[88] 347 U.S. 201, 214 (1954)

[89] 412 U.S. 546 (1973)

[90] Congress did not, in passing the Copyright Act of 1909, determine that recordings, as original writings, were unworthy of all copyright protection. 412 U.S. 546, 563-566 (1973). 

[91] https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf

[92]  114 F.3d 955, 957–59 (9th Cir. 1997)

[93] Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018), No. 16-15469

[94] Kelley v Chicago Park Dist 635 F.3d 290, 304 (7th Cir 2011)

[95] Satava v Lowry 323 F.3d 805, 813 (9th Cir 2003)

[96] https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf

[97] Second Request for Reconsideration for Refusal to Register Théâtre D’opéra Spatial (SR # 1-11743923581; Correspondence ID: 1-5T5320R) see https://www.copyright.gov/rulings-filings/review-board/docs/Theatre-Dopera-Spatial.pdf

[98] https://www.copyright.gov/rulings-filings/review-board/docs/Theatre-Dopera-Spatial.pdf

[99] “While Mr. Allen did not disclose in his application that the Work was created using an AI system” see https://www.copyright.gov/rulings-filings/review-board/docs/Theatre-Dopera-Spatial.pdf

[100] Ibid. 

[101] Ibid. 

[102] 88 Fed. Reg. at 16,192.

[103] Ibid. at 16,192–93. Emphasis added

[104] https://www.copyright.gov/rulings-filings/review-board/docs/Theatre-Dopera-Spatial.pdf

[105] https://ipwatchdog.com/2023/06/29/u-s-copyright-office-generative-ai-event-three-key-takeaways/id=162771/

[106] “As a constitutional matter, copyright protects only those constituent elements of a work that possess more than a de minimis quantum of creativity”  499 U.S. 340 (1991) see https://supreme.justia.com/cases/federal/us/499/340/

[107] https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence

[108] https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence

[109] Second Request for Reconsideration for Refusal to Register SURYAST (SR # 1-11016599571; Correspondence ID: 1-5PR2XKJ) see https://www.copyright.gov/rulings-filings/review-board/docs/SURYAST.pdf

[110] https://www.copyright.gov/rulings-filings/review-board/docs/SURYAST.pdf

[111] Email from Ankit Sahni to U.S. Copyright Office, Attach. (Apr. 14, 2022) (“Sahni AI Description”). See https://www.copyright.gov/rulings-filings/review-board/docs/SURYAST.pdf

[112] Ibid.

[113] https://www.copyright.gov/rulings-filings/review-board/docs/SURYAST.pdf emphasis added. 

[114] https://www.copyright.gov/rulings-filings/review-board/docs/SURYAST.pdf

[115] https://www.copyright.gov/rulings-filings/review-board/docs/SURYAST.pdf

[116] https://ipwatchdog.com/2023/02/23/u-s-copyright-office-clarifies-limits-copyright-ai-generated-works/id=157023/

[117] “Midjourney begins the image generation process with a field of visual “noise,” which is refined based on tokens created from user prompts that relate to Midjourney’s training database. The information in the prompt may “influence” generated image, but prompt text does not dictate a specific result.” See https://www.copyright.gov/docs/zarya-of-the-dawn.pdf
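
Purely as an illustration of the general idea described in that passage (starting from random “noise” and iteratively refining it under the influence of a prompt), the following toy Python sketch may help. It is emphatically not Midjourney’s algorithm: the numeric “image”, the “prompt” target values, and the refinement rate are assumptions made solely for the example.

    # Toy illustration only: begin with random "noise" and nudge it, step by
    # step, toward values conditioned on a "prompt". Real systems operate on
    # images with learned models; here everything is reduced to a few numbers.
    import random

    def generate(prompt_target, steps=50, rate=0.2):
        """Refine random noise a little toward the prompt-conditioned target."""
        state = [random.gauss(0.0, 1.0) for _ in prompt_target]  # noise field
        for _ in range(steps):
            state = [s + rate * (t - s) for s, t in zip(state, prompt_target)]
        return state

    print([round(x, 3) for x in generate([0.1, 0.9, 0.5, 0.3])])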

[118] “We conclude that Ms. Kashtanova is the author of the Work’s text as well as the selection, coordination, and arrangement of the Work’s written and visual elements. That authorship is protected by copyright.” See https://www.copyright.gov/docs/zarya-of-the-dawn.pdf

[119] https://www.copyright.gov/docs/zarya-of-the-dawn.pdf

[120] “Because the current registration for the Work does not disclaim its Midjourney-generated content, we intend to cancel the original certificate issued to Ms. Kashtanova and issue a new one covering only the expressive material that she created.” See https://www.copyright.gov/docs/zarya-of-the-dawn.pdf

[121] https://www.legislation.gov.uk/ukpga/1988/48/section/178

[122] https://www.legislation.gov.uk/ukpga/1988/48/section/9

[123] J McCutcheon notes that a number of jurisdictions have enacted the same or similar: Copyright Ordinance(Hong Kong) cap 528, s 11(3); Copyright and Related Rights Act 2000 (NI) s 21(f); Copyright Act 1994 (NZ) s 5(2); Copyright Act 1978 (South Africa) s 1(1) (definition of ‘author’). A similar provision is found in the Copyright Act 1957 (India). See J McCutcheon, ‘Curing the Authorless Void: Protecting Computer-Generated Works following IceTV and Phone Directories’ (2013) 37(1) Melbourne University Law Review 46 at 50. 

[124] [2023] EWHC 3090 (Ch)

[125] https://www.courtlistener.com/docket/66788385/getty-images-us-inc-v-stability-ai-inc/

[126] https://www.iptechblog.com/2023/07/copyright-protection-for-ai-works-uk-vs-us/

[127] https://www.iptechblog.com/2023/07/copyright-protection-for-ai-works-uk-vs-us/

[128] https://www.iptechblog.com/2023/07/copyright-protection-for-ai-works-uk-vs-us/ Emphasis added

[129] https://copyrightblog.kluweriplaw.com/2024/05/28/artificial-intelligence-and-copyright-the-italian-ai-law-proposal/#:~:text=Article%2024%20of%20the%20AI%20Law%20Proposal%20also%20introduces%20a,and%2070%2Dquarter”).

[130] https://www.economist.com/business/2024/04/14/generative-ai-is-a-marvel-is-it-also-built-on-theft

[131] https://www.economist.com/business/2024/04/14/generative-ai-is-a-marvel-is-it-also-built-on-theft

[132] The New York Times v Open AI.

[133] Getty v Stability AI

[134] J McCutcheon, ‘Curing the Authorless Void: Protecting Computer-Generated Works following IceTV and Phone Directories’ (2013) 37(1) Melbourne University Law Review 46 at 78. 

[135] The author adjusted a similar definition made by the Copyright Law Review Committee (CLRC) in its Draft Report on Computer Software Protection (1993) at 13.18. 

[136] Draft Report on Computer Software Protection (1993) at 13.20.

[137] J McCutcheon, ‘Curing the Authorless Void: Protecting Computer-Generated Works following IceTV and Phone Directories’ (2013) 37(1) Melbourne University Law Review 46 at 80.

[138] https://eur-lex.europa.eu/lexUriServ/LexUriServ.do?uri=CELEX%3A31996L0009%3AEN%3AHTML

[139] Abstract, Tianxiang He, AI Originality Revisited: Can We Prompt Copyright over AI-Generated Pictures?, GRUR International, 2024, ikae024, https://doi.org/10.1093/grurint/ikae024. Also He, Tianxiang, AI Originality Revisited: Can We Prompt Copyright over AI-Generated Pictures?, GRUR International: Journal of European and International IP Law, Volume 73 (4): 9 – Mar 7, 2024, at 300.

[140] He, Tianxiang, AI Originality Revisited: Can We Prompt Copyright over AI-Generated Pictures?, GRUR International: Journal of European and International IP Law, Volume 73 (4): 9 – Mar 7, 2024, at 300.

[141] Ibid.

[142] Ibid at 300.

[143] One source considers Liu to have been wrongly decided: Qian Wang, Creation Is Not Like a Box of Chocolates: Why Is the First Judgment Recognizing Copyrightability of AI-Generated Content Wrong?, GRUR International, 2024, ikae082, https://doi.org/10.1093/grurint/ikae082 stating: “First, the mistaken perception of AI as a tool of creation akin to a brush, camera, and Photoshop. Second, the failure to analyse the nature of “user’s input” according to the legal definition of an “act of creation,” without properly distinguishing between intellectual input as an idea and as an expression. Third, wrongly attributing user’s authorship of AI-generated content to AI’s lack of free will and legal personality.” (at p. 6)

[144] Ibid.

[145] Ibid at 301.

[146] Ibid at 302 citing Guobin Cui ‘Users’ Original Contribution in AI-generated Contents’ (2023) 6 China Copyright 18.

[147] Ibid at 304.

[148] Ibid.

[149] Public Views on Artificial Intelligence and Intellectual Property USPTO (2020). See also Pheh Hoon Lim, Phoebe Li, Artificial intelligence and inventorship: patently much ado in the computer program, Journal of Intellectual Property Law & Practice, Volume 17, Issue 4, April 2022, Pages 376–386, https://doi.org/10.1093/jiplp/jpac019

[150] Public Views on Artificial Intelligence and Intellectual Property USPTO (2020) at p. ii, emphasis added.

[151] China had also previously considered the issue of simple AI text generators such as Dreamwriter in the cases Tencent v Yingxun (2019) Y0305MC, No. 14010 (“AI-created works can be copyrighted under Chinese law, just like those created by human beings. In a recent case, a local court in Shenzhen provided this clear position for the first time in China.” See https://www.chinajusticeobserver.com/law/x/2019-yue-0305-min-chu-14010) and in Film v Baidu (2018), J0491MC, No. 239 (“AI-generated content is not a work created by an author” – see https://english.bjinternetcourt.gov.cn/2021-12/20/c_492.htm).

[152] Beijing Internet Court (2023), J0491MC, No 11279, also (2023) Jing 0491 Min Chu No. 11279. See https://wxb.xzdw.gov.cn/wlaq/aqdt/202312/t20231219_426887.html. See also the text of the decision in GRUR International 73(4), 2024, 360–368, https://doi.org/10.1093/grurint/ikae025.

[153] GRUR International 73(4), 2024, 360 at 364.

[154] https://wxb.xzdw.gov.cn/wlaq/aqdt/202312/t20231219_426887.html

[155] (2019) Y0305MC, No. 14010. 

[156] See https://www.chinajusticeobserver.com/law/x/2019-yue-0305-min-chu-14010.

[157] https://www.chinajusticeobserver.com/law/x/2019-yue-0305-min-chu-14010

[158] https://www.chinajusticeobserver.com/law/x/2019-yue-0305-min-chu-14010

[159] Ryan Abbott & Elizabeth Rothman, Disrupting Creativity: Copyright Law in the Age of Generative Artificial Intelligence, 75 Fla. L. Rev. 1141 (2023) at 1201. 

[160] The Economist also notes that litigation is in contemplation in the area of AI-generated computer code, an area that is “only thinly protected” – a group of programmers is claiming that Microsoft’s GitHub Copilot and OpenAI’s Codex infringed their copyright by training on their work. https://www.economist.com/business/2024/04/14/generative-ai-is-a-marvel-is-it-also-built-on-theft The litigation is entitled Doe v Github Inc and status updates are available here: https://www.bakerlaw.com/the-copilot-litigation/

[161] Lemley, Mark A., How Generative AI Turns Copyright Upside Down (July 21, 2023). Available at SSRN: https://ssrn.com/abstract=4517702 or http://dx.doi.org/10.2139/ssrn.4517702

[162] Law Society Gazette (UK) https://www.lawgazette.co.uk/legal-updates/monkey-selfie-row/5042631.article The Irish Law Society Gazette also featured this story on its cover in 2014 but the case is originally from the UK.

[163] Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018)No. 16-15469 https://law.justia.com/cases/federal/appellate-courts/ca9/16-15469/16-15469-2018-04-23.html see also https://web.archive.org/web/20160109005741/https://arstechnica.com/tech-policy/2016/01/judge-says-monkey-cannot-own-copyright-to-famous-selfies/

[164] https://web.archive.org/web/20140814172455/http://lightbox.time.com/2014/08/06/monkey-selfie/#1

[165] https://www.itv.com/news/2014-08-06/wikipedia-refuses-to-delete-photo-as-monkey-owns-it

[166] A Chinese Court raised this issue as a hypothetical question to be decided: “In an era of advanced AI, robots will have self-awareness and be completely independent. Will an article generated in such a context be identified as a work?” Film v Baidu (2018), J0491MC, No. 239. See https://english.bjinternetcourt.gov.cn/2021-12/20/c_492.htm

[167] Liability of an AI Service Provider for Copyright Infringement, GRUR International, 2024, ikae102, https://doi.org/10.1093/grurint/ikae102

[168] Ibid at p. 8

[169] https://docs.justia.com/cases/federal/district-courts/delaware/dedce/1:2020cv00613/72109/1 See here for status updates: https://www.bakerlaw.com/thomson-reuters-v-ross/

[170] https://www.perkinscoie.com/en/news-insights/recent-rulings-in-ai-copyright-lawsuits-shed-some-light-but-leave-many-questions.html#:~:text=ROSS%20Intelligence%20Inc.%2C%20which%20was,was%20developed%20by%20ROSS%20Intelligence.

[171] https://llmlitigation.com/pdf/03417/kadrey-meta-complaint.pdf

[172] https://www.perkinscoie.com/en/news-insights/recent-rulings-in-ai-copyright-lawsuits-shed-some-light-but-leave-many-questions.html#:~:text=ROSS%20Intelligence%20Inc.%2C%20which%20was,was%20developed%20by%20ROSS%20Intelligence.

[173] https://stablediffusionlitigation.com/pdf/00201/1-1-stable-diffusion-complaint.pdf see case updates here: https://www.bakerlaw.com/andersen-v-stability-ai/

[174] https://fingfx.thomsonreuters.com/gfx/legaldocs/byprrngynpe/AI%20COPYRIGHT%20LAWSUIT%20mtdruling.pdf

[175] https://www.perkinscoie.com/en/news-insights/recent-rulings-in-ai-copyright-lawsuits-shed-some-light-but-leave-many-questions.html#:~:text=ROSS%20Intelligence%20Inc.%2C%20which%20was,was%20developed%20by%20ROSS%20Intelligence.

[176] https://www.perkinscoie.com/en/news-insights/recent-rulings-in-ai-copyright-lawsuits-shed-some-light-but-leave-many-questions.html#:~:text=ROSS%20Intelligence%20Inc.%2C%20which%20was,was%20developed%20by%20ROSS%20Intelligence.

[177] Another author refers to this prospect when she states: “The definition of the subject of liability is very important to investigate the tort liability and compensate the victim. Typically, the responsible parties include AI developers, providers, users, and other parties that may be involved. The liability of each subject shall be determined according to its degree of control over the infringement, its degree of participation and its degree of fault.” Ding Ling, Analysis on Tort Liability of Generative Artificial Intelligence. Science of Law Journal (2023) Vol. 2: 102-107. DOI: 10.23977/law.2023.021215 at 105. It is also referred to as follows: “[E]ven if a user were directly liable for infringement, the AI company could potentially face liability under the doctrine of ‘vicarious infringement’, which applies to defendants who have ‘the right and ability to supervise the infringing activity’ and ‘a direct financial interest in such activities’”. Zirpoli, CRS Legal Sidebar (February 23, 2023) 10922 see https://crsreports.congress.gov/product/pdf/LSB/LSB10922

[178] https://www.perkinscoie.com/en/news-insights/recent-rulings-in-ai-copyright-lawsuits-shed-some-light-but-leave-many-questions.html#:~:text=ROSS%20Intelligence%20Inc.%2C%20which%20was,was%20developed%20by%20ROSS%20Intelligence.

[179] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565

[180] Ibid at 15.

[181] Ibid at 16 citing European Parliament resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies, 2020/2015(INI), par 15.

[182] Ibid at 16.

[183] Citing Sammarco 2020; Klawonn 2019.

[184] Ibid at 16.

[185] https://platform.openai.com/docs/gptbot

[186] “Web pages crawled with the GPTBot user agent may potentially be used to improve future models and are filtered to remove sources that require paywall access, are known to primarily aggregate personally identifiable information (PII), or have text that violates our policies. Allowing GPTBot to access your site can help AI models become more accurate and improve their general capabilities and safety. Below, we also share how to disallow GPTBot from accessing your site.” https://platform.openai.com/docs/gptbot)
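
By way of illustration only, the following is a minimal Python sketch (standard library only) of how a publisher might check whether a robots.txt file of the kind referred to above would block the GPTBot crawler. The robots.txt directives and the example URL are assumptions made for the illustration and are not drawn from OpenAI’s documentation.

    # Minimal sketch: checking whether a robots.txt would block OpenAI's GPTBot.
    # The directives below are an illustrative example of the robots exclusion
    # convention; consult OpenAI's own documentation for its current guidance.
    from urllib.robotparser import RobotFileParser

    ROBOTS_TXT = "User-agent: GPTBot\nDisallow: /\n"   # hypothetical robots.txt

    def gptbot_allowed(robots_txt, url):
        """Return True if this robots.txt would permit GPTBot to fetch url."""
        parser = RobotFileParser()
        parser.parse(robots_txt.splitlines())
        return parser.can_fetch("GPTBot", url)

    print(gptbot_allowed(ROBOTS_TXT, "https://example.com/articles/1"))  # False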

[187] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at 17. 

[188] Ibid at 18.

[189] Recital 8 to Directive (EU) 2019/790 of the European Parliament and of the Council on copyright and related rights in the Digital Single Market.

[190] See comments of Hyland, in the article, Hallissey, Art in the age of mechanical reproduction, Law Society Gazette, July 2024, at p. 52: “It is concerning that the language surrounding the reservation of the use of works in article 4 is not entirely clear.” One source says an ex-post withdrawal of consent for use of copyright data for training an AI model via email should be permissible: Hongjiao Zhang, Yahong Li, Opt-Out Implied Licenses in Copyright Law: from Search Engines to GPAI Models, GRUR International, 2024, ikae088, https://doi.org/10.1093/grurint/ikae088 at p. 11. See also another source that points to a dramatic reduction in the data now available to Artificial Intelligence as a result of restrictions put on the use of that data: https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html?searchResultPosition=5 There were also reports that the UK government is considering an opt-out regime where models could be trained on copyrighted data unless publishers opted out. https://www.ft.com/content/26bc3de1-af90-4c69-9f53-61814514aeaa

[191] https://www.ft.com/content/07611b74-3d69-4579-9089-f2fc2af61baa

[192] https://www.lawsociety.ie/gazette/in-depth/eu-copyright-and-ip-protection

[193] Readers may be interested in the equivalent provision in the United States of America: the so-called “safe harbor” protections against legal liability for content that users post on online platforms, found at section 230 of the Communications Decency Act 1996.

[194] Pursuant to the European Union (Copyright and Related Rights in the Digital Single Market) Regulations 2021.

[195] Ibid at 20.

[196] Ibid at 19. Emphasis added.

[197] Ibid. 

[198] Ibid. 

[199] Ibid. 

[200] Ibid.

[201] Citing Art 3(1) of the Database Directive; Art 6 of Directive 2006/116/EC of the European Parliament and of the Council of 12 December 2006 on the term of protection of copyright and certain related rights (“Term Directive”), OJ L 372, 27.12.2006, p. 12 – 18; Art 1(3) of the Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on the legal protection of computer programs, OJ L111, 5.5.2009, p. 16-22.

[202] CJEU, 1 December 2011, case C-145/10, Painer, ECLI:EU:C:2011:708. https://curia.europa.eu/juris/document/document.jsf?text=&docid=115785&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=7968279

[203] Emphasis added. CJEU, 1 December 2011, case C-145/10, Painer, ECLI:EU:C:2011:708. https://curia.europa.eu/juris/document/document.jsf?text=&docid=115785&pageIndex=0&doclang=en&mode=lst&dir=&occ=first&part=1&cid=7968279

[204] Citing European Parliament resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies, 2020/2015(INI), par 15.

[205] CJEU, 1 December 2011, case C-145/10, Painer, ECLI:EU:C:2011:708.

[206] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at 20.

[207] Ibid at 21.

[208] Ibid.

[209] Citing the decision of the Legal Board of Appeal of the EPO in case J 8/20 (DABUS), confirming that under the European Patent Convention (EPC) an inventor designated in a patent application must be a human being.

[210] Directive (EU) 2019/790.

[211] See Alexander Peukert, Copyright in the Artificial Intelligence Act – A Primer, GRUR International, 2024, ikae057, https://doi.org/10.1093/grurint/ikae057. The Financial Times has also reported that the UK Government is set to produce rules around transparency in the training process of AI models. https://www.ft.com/content/17f4c7ee-b1bc-4bde-8e92-bebb555479a2

[212] Alexander Peukert, Copyright in the Artificial Intelligence Act – A Primer, GRUR International, 2024, ikae057, https://doi.org/10.1093/grurint/ikae057 at p. 13.

[213] Kwan Yiu Cheng, The law of digital afterlife: the Chinese experience of AI ‘resurrection’ and ‘grief tech’, International Journal of Law and Information Technology, Volume 33, Issue 1, 2025, eaae029, https://doi.org/10.1093/ijlit/eaae029

[214] See https://www.theguardian.com/media/2022/may/17/who-owns-einstein-the-battle-for-the-worlds-most-famous-face

[215] https://www.nytimes.com/2011/03/28/opinion/28madoff.html

[216] https://www.congress.gov/congressional-record/congressional-record-index/118th-congress/2nd-session/nurture-originals-foster-art-and-keep-entertainment-safe-no-fakes-act/1920107

[217] https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB1836

[218] Kwan Yiu Cheng, The law of digital afterlife: the Chinese experience of AI ‘resurrection’ and ‘grief tech’, International Journal of Law and Information Technology, Volume 33, Issue 1, 2025, eaae029, https://doi.org/10.1093/ijlit/eaae029 at p. 16

[219] “Generative AI is sucking up cash, electricity, water, copyrighted data.” Financial Times, 6th April, “AI keeps going wrong. What if it can’t be fixed?” See also The Economist, Generative AI is a marvel. Is it also built on theft? https://www.economist.com/business/2024/04/14/generative-ai-is-a-marvel-is-it-also-built-on-theft

[220] https://unu.edu/publication/recommendations-use-synthetic-data-train-ai-models#:~:text=Using%20synthetic%20or%20artificially%20generated,quality%2C%20security%20and%20ethical%20implications.

[221] https://www.yalelawjournal.org/forum/artificial-why-copyright-is-not-the-right-policy-tool-to-deal-with-generative-ai

[222] Ibid.

[223] Ibid.

[224] https://www.ft.com/content/d267665e-abfa-477c-85d8-7ca43e82b652

Chapter 3

Artificial Intelligence and other Intellectual Property rights

Introduction

This chapter will consider other intellectual property rights not covered in chapter 2. Primarily the chapter will consider the issue of patent protection and will continue the discussion from the previous chapter of the concept of a “predominantly human intellectual activity” – in other words, the extent to which an LLM can assist a human in drafting a patent application. This concept of predominantly human activity points to the idea that greater human involvement is more likely to yield a positive outcome. The reader will be aware that, in order to proceed to grant, an application for a patent must demonstrate an inventive step, novelty, and industrial application. The use of Artificial Intelligence does not touch specifically on any one of these requirements but, rather, speaks to the inventive process as a whole. The chapter will consider how machines are capable of radically altering the landscape of patent drafting as LLMs become an ever-increasing part of the inventive process. It will also consider the cases in various jurisdictions in which an AI system was named as an inventor, and the outcomes in each, as well as EPO outreach on the issue of inventorship and Artificial Intelligence. Finally, the chapter will consider whether Artificial Intelligence technology can ever, itself, be the subject of a patentable invention. 

Patents

The concept of a predominantly human intellectual activity can be recognised not just in the field of copyright, but also in the field of patents.[1] One online source considers the current generation of Large Language Models (LLMs) in the drafting of patent applications with a view to establishing their methodological capabilities:

“However, even ignoring the subject-matter bias of current tools, LLM tools for patent drafting and prosecution still have some considerable limitations. LLM are undeniably very good at providing generic text on a topic for which the internet provides extensive guidance, and for which a deep understanding of complex specialist technical issues is not required. LLM are similarly very good at summarising and reformatting text whilst retaining the pre-existing meaning of the text, e.g. from a bullet point list into paragraphs. However, both of these tasks are far removed from the detailed technical verbal reasoning required for patent drafting and prosecution. 

LLM are thus good at providing long-form novelty or inventive step arguments, provided that they are spoon-fed the argumentative points in the form of prompts. LLM may also be used to draft claims when given a detailed description of the essential features of a mechanical invention (and vice versa). Left to its own initiative to determine and claim the inventive concept of an invention, or to devise its own novelty or inventive step argument in view of cited prior art, a LLM will also be able to provide text that makes sense, is error free and superficially responds to an Examiner’s objections.”[2]

Still, one commentator considers that machines are set to radically alter the landscape in this area:

“In some cases, a computer’s output constitutes patentable subject matter, and the computer rather than a person meets the requirements for inventorship. As such machines become an increasingly common part of the inventive process, they may replace the standard of the person skilled in the art now used to judge nonobviousness. Creative computers require a rethinking of the criteria for inventiveness, and potentially of the entire patent system.”[3]

Another describes the human + AI relationship in patent drafting as “Centaur” inventing and considers that it presents practical challenges to various aspects of patent law, including the inventorship doctrine.[4] Jasper Tran highlights the burgeoning area of AI-related patent litigation, finding that the introduction of a non-human actor into a traditional patent infringement analysis means that litigants will face unique issues stemming from the dynamic nature of AI technology.[5] An article in the Financial Times tends to concur, pointing out that AI tools could now be used to build cases to invalidate patent applications – with potentially damaging effects for the R&D sector.[6] Other authors highlight how the USPTO is exploring the use of Artificial Intelligence in various ways, including through the auto-classification of applications.[7] One contributor to the University of Chicago Law Review even posits that an AI invention should be patent eligible at the first opportunity while granting the benefits of patents only to deserving inventions.[1]


[1] Zuchniarz, Rogue AI Patents and the USPTO’s Rejection of Alice, University of Chicago Law Review, Vol 91.6 [2024]

Engel considers the case where an AI was named as inventor in a series of patent applications.[8] He finds that three different patent offices refused such an application – but for different reasons. The AI system was called DABUS and, as stated, was named as inventor in the application. The European Patent Office focused on formal rules for its dismissal, while the UK Intellectual Property Office considered more substantive aspects; the US Patent and Trademark Office relied on statutory language. He remarks that the decisions, however, taken together, “do not rule out the patentability of AI-assisted inventions in general, as it remains possible to designate a human inventor when AI has merely facilitated the inventive process”.[9]

Engel notes that AI systems may be employed in all steps of the inventive process, ranging from initial and fundamental research, through selection decisions between different technical solutions, up to the patent application itself, giving the example of searching prior art. He remarks that “the resulting shift away from (exclusively) human labour and creation may fundamentally affect the paradigms that inform current patent law.”[10]

In his paper he refers to a testing of the traditional view of inventorship in autumn 2018, when two patent applications were filed that explicitly named an AI system as the inventor. The applications were, in effect, test cases: they were intended to test the traditional view rather than to lead to a successful patent grant in and of themselves. The team behind the applications included Ryan Abbott, who is quoted by the author as saying that the applications were “test cases”. The filings concerned interlocking beverage containers that are easy for robots to grasp and a flashing light that flashes in a rhythm that is hard to ignore.[11] As stated, these inventions were attributed to DABUS, an AI system designed by Stephen Thaler which uses millions of neural networks within which there are multiple systems.[12]

The EPO refused to grant a patent which designated an AI as inventor, holding that it did not fulfil the formal requirements of Art 81 EPC and Rule 19(1) EPC. According to those provisions a patent application shall designate the inventor, stating the family name, given names, and full address. The EPO found that listing the name of a machine did not satisfy this requirement. The EPO also developed its reasoning in respect of the requirement to name a person, stating that the travaux préparatoires presuppose that inventors are persons, which the EPO understands to refer to natural persons.[13]

The UK Intellectual Property Office adopted a less formal approach and addressed the question of who has the right to apply for and obtain a patent. Section 7(3) of the UK Patents Act 1977 provides a definition of the inventor as the “actual deviser of an invention” and Section 13(2)(a) makes reference to the inventor as a “person”. The UK IPO considered whether an AI could constitute a person. For this it looked at the wording of the legislation as well as at the drafting history and found that non-human inventors were not envisaged during the discussions leading to the creation of the relevant sections. The matter went as far as the Supreme Court of the United Kingdom,[14] which upheld the decisions of both the High Court and the Court of Appeal dismissing the applicant’s assertion that the UKIPO had erred, finding that “an inventor within the meaning of the 1977 Act must be a natural person”.[15]

In the United States of America the United States Patent and Trademark Office likewise refused to grant a patent. It focused on statutory language and said that AI cannot be considered an inventor under current US patent law. The application did not comply with the requirement to name a (human) inventor, and the Office referred to the wording of 35 USC § 100(a) which defines an inventor as “the individual, or, if a joint invention, the individuals collectively who invented or discovered the subject matter of the invention.” Several observations were made regarding the statutory language to “back up the notion that this individual needs to be human.”[16] This position was upheld by the Federal Circuit[1] with a continuing emphasis on the requirement for the inventor to be human.[2] The USPTO issued guidance on the issue in 2024[3] in which it did not consider AI-assisted inventions to be unpatentable per se; instead, the focus lies with an analysis of the human contribution to the process and whether that contribution is significant – citing Pannu.[4] Each named inventor must contribute in some significant manner to the invention; that is, each named inventor must satisfy the factors set out in Pannu.


[1] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022)

[2] See https://www.uspto.gov/sites/default/files/documents/inventorship-guidance-for-ai-assisted-inventions.pdf

[3] Inventorship guidance for AI-assisted inventions and Request for comments on February 13, 2024 (89 FR 10043). See https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions

[4] 155 F.3d 1344 (1998) See https://www.bitlaw.com/source/cases/patent/Pannu.html

[5] See https://www.uspto.gov/sites/default/files/documents/inventorship-guidance-for-ai-assisted-inventions.pdf

Engel states as follows:

“A debate whether AI should be considered an inventor for the purposes of patent law needs to take into account what effect this would have on patent law in general, in particular on the aims patent law seeks to achieve.”[17]

He considers the matter before pointing to “downsides”:

“Still, patent protection for AI inventions may also have downsides. The incentive for human inventorship may be reduced if AI inventions are granted the same protection. While inventive AI, once feasible, might produce inventions at a small cost, the comparatively higher human effort might not be undertaken if it is not specially rewarded.”[18]

In Japan, likewise, a court refused to register a patent where an artificial intelligence had been listed as the inventor. The Court stated: 

“The Patent Act does not envisage that an invention can be autonomously made by artificial intelligence (AI) and there is no legislative intention to protect such an AI invention by granting a patent. An ‘invention’ within the meaning of Sec. 29(1) Patent Act is limited to inventions made by a natural person.”[1]


[1] Inventors under Patent Law Must Be Natural Persons, GRUR International, 2025, ikaf091, https://doi.org/10.1093/grurint/ikaf091, citing, in the DABUS case, the Japan Patent Act, Secs. 29(1), 184quinquies(1). 

The EPO held a conference on inventorship by AI[19] and summarised the findings:

“The impressive developments in the area of AI have sparked suggestions that AI could invent just as humans can and that it should be accepted as inventor.

From the perspective of inventorship, three categories of AI inventions may be identified:

  1. human-made inventions using AI for the verification of the outcome 
  2. inventions in which a human identifies a problem and uses AI to find a solution
  3. AI-made inventions, in which AI identifies a problem and proposes a solution without human intervention.

In the first two categories, AI is used as a tool for human inventors, augmenting their capabilities. In the third category (AI-made inventions), scientists seem to agree that AI which could invent independently of human direction, instruction and oversight is a matter of undefined future and thus science fiction.

There is a common understanding that the inventor is a human being: the person who created the invention by their own creative activity. This has been confirmed by an academic study on AI inventorship commissioned by the EPO and in the discussions with the EPC contracting states.”[20]

Whatever the position on the issue of AI inventorship – in other words, naming the AI as the inventor – the literature tends to suggest there is no reason why a human, assisted by an AI, could not author an invention. Kim, for example, concludes:

“As long as a human specifies instructions that determine how the input-output relation is derived through computation, and as long as computers are bound by such instructions, there is seemingly no reason why AI-aided – allegedly ‘AI-generated’ – inventions should be treated under patent law differently than inventions assisted by other types of problem-solving tools and methods as far as inventorship is concerned. Instead, the use of such techniques should be a matter of the assessment of inventive step.”[21]

In the EPO Guidelines for Examination[22] that Office now requires the mathematical methods and training data used by an AI invention to be disclosed in sufficient detail to reproduce the technical effect of the invention over the whole scope of the claims – something Ghidini refers to as a further technical effect[23] and which points, in that author’s view, towards a general admissibility of machine-produced works for IP protection. This, the author claims, ignores the material duality of AI technology: the variance between accepting inputs from users on the one hand and, effectively, using its own initiative on the other.[24]

Finally, on the subject of patents, there is the corollary issue of whether Artificial Intelligence itself can be the subject matter of a patent application. On this issue the EPO, acknowledging that the position may be different in various jurisdictions, gave the following guidance:[25]

“The EPO has responded to the emergence of AI in patent applications by refining its approach to patentability of inventions involving AI.

AI is considered a branch of computer science, and  therefore, inventions involving AI are considered “computer-implemented inventions” (CII). In this context, the Guidelines for Examination in the EPO, F-IV, 3.9 define the term CII as inventions which involve computers, computer networks or other programmable apparatus, whereby at least one feature is realised by means of a program.

Computer-implemented inventions are treated differently by patent offices in different regions of the world. Article 52(2)(c) of the European Patent Convention (EPC) excludes computer programs “as such” from patent protection. Nevertheless, inventions involving software are not excluded from patentability as long as they have a technical character.

Over the years, the case law of the EPO Boards of Appeal has clarified the implications of Article 52 EPC, establishing a stable and predictable framework for the patentability of computer-implemented inventions, including inventions related to AI. This framework is reflected in the EPO’s Guidelines for Examination.

Like any other invention, in order to be patentable under the EPC, a computer-implemented invention must not be excluded from patentability (Article 52(2) and (3) EPC) and must fulfil the patentability requirements of novelty, inventive step and susceptibility of industrial application (Article 52(1) EPC). The technical character of the invention is important when assessing whether these requirements are met.

The same approach applies to computer-implemented inventions related to AI (see, in particular, the Guidelines for Examination in the EPO, G-II, 3.3.1 Artificial intelligence and machine learning).

AI is based on computational models and mathematical algorithms which are per se of an abstract nature. Nevertheless, patents may be granted when AI leaves the abstract realm by applying it to solve a technical problem in a field of technology. For example, the use of a neural network in a heart-monitoring apparatus for the purpose of identifying irregular heartbeats makes a technical contribution. The classification of digital images, videos, audio or speech signals based on low-level features (e.g. edges or pixel attributes for images) are other typical technical applications of AI. Further examples are listed in the Guidelines for Examination in the EPO, G-II, 3.3 Mathematical methods.

In addition, a technical solution to a technical problem can also be provided when the invention is directed to a specific technical implementation of AI, i.e. one which is motivated by technical considerations of the internal functioning of a computer (e.g. a specific technical implementation of neural networks by means of graphics processing units (GPUs)).

The EPC thus enables the EPO to grant patents for inventions in many fields of technology in which AI finds a technical application. Such fields include, but are not limited to, medical devices, the automotive sector, aerospace, industrial control, additive manufacturing, communication/media technology, including voice recognition and video compression, and also the computer, processor or computer network itself.   

The EPO also engages with its external stakeholders around the topic of patenting AI. In two recent events, Artificial intelligence: a patent practitioner’s perspective and Artificial intelligence: an examiner’s perspective, EPO experts exchanged ideas with EPO users as well as with experts from national patent offices on several AI examples covering a broad range of technologies.”[26]

Gaming

Likewise, within the context of video gaming, Farmaki points out that several questions have been raised at the intersection of IP and AI, including whether Intellectual Property Rights should be conferred on AI and what works should be considered original. The World Intellectual Property Organisation (WIPO) has already seen these issues discussed, and that author calls for a conversation on AI, IP and frontier technologies, like GPT-4 or the more recent GPT-4o.[27]

The World Intellectual Property Organisation (WIPO) has published on the issue,[28] though prior to the release of LLMs like ChatGPT.[29] On the issue of copyright they state:

“AI applications are increasingly capable of generating literary and artistic works. This capacity raises major policy questions for the copyright system, which has always been intimately associated with the human creative spirit and with respect and reward for, and the encouragement of, the expression of human creativity. The policy positions adopted in relation to the attribution of copyright to AI-generated works will go to the heart of the social purpose for which the copyright system exists.”[30]

Conclusion

This chapter has considered other intellectual property matters not covered in chapter 2. It has focused in particular on patent applications. It has shown that LLMs already play a part in the patent application process – though currently more in terms of drafting descriptions or claims based on material inputted by the user, and not the additional step of actually developing their own patent claims. That may come in the future, however, and, if it does, it will require consideration by the courts or by patent-granting authorities. At present, based on the precedents presented by the DABUS patent applications, an Artificial Intelligence cannot invent. 


[1] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at 20.

[2] https://ipkitten.blogspot.com/2023/10/use-of-large-language-models-in-patent.html#:~:text=LLM%20possesses%20the%20remarkable%20ability,still%20have%20some%20considerable%20limitations.

[3] Abbott, Autonomous Machines and their Inventions [2017] Mitt 429 see https://openresearch.surrey.ac.uk/esploro/outputs/99516715102346

[4] Truong, Kenny. “Expanding Nonobviousness to Account for AI-Based Tools.” Journal of the Patent and Trademark Office Society, vol. 104, no. 1, January 2024, pp. 51-70. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/jpatos104&i=80.

[5] Tran, Jasper L. “Of Artificial Intelligence and Patent Litigation.” Journal of the Patent and Trademark Office Society, vol. 104, no. 1, January 2024, pp. 43-50. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/jpatos104&i=53 at 43.

[6] https://www.ft.com/content/7f62f916-6083-4efe-aa78-8f412d03e213

[7] Udupa, Vaishali, and Devon Kramer. “The Integration of AI and Patents.” Journal of the Patent and Trademark Office Society, vol. 104, no. 1, January 2024, pp. 1-4. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/jpatos104&i=14 at 4.

[8] Engel, A. (2020). Can a Patent Be Granted for an AI-Generated Invention?. “GRUR International: GRUR Journal of European and International IP Law (Formerly: Gewerblicher Rechtsschutz und Urheberrecht, Internationaler Teil)”, 69(11), 1123-1129.

[9] Ibid at 1.

[10] Ibid. 

[11] Ibid, citing Leo Kelion, “AI system ‘should be recognised as inventor’” BBC (London, 1 August 2019) https://www.bbc.com/news/technology-49191645

[12] Ibid. 

[13] It is interesting to note, however, a German reference to the ECJ for a preliminary ruling in 2021 which stated: “The open-ended term ‘person’, which is often used in the Charter, also includes nature or individual ecosystems […] It would also be contradictory to grant legal subjectivity to artificial intelligence, as intended at European level, but not to ecosystems.” Case C-388/21 Request for a Preliminary Ruling, Landgericht Erfurt (Germany). Presumably, the author of this reference was referring to the EU proposal to assign legal personality to a robot. https://www.euractiv.com/section/digital/opinion/the-eu-is-right-to-refuse-legal-personality-for-artificial-intelligence/ see https://curia.europa.eu/juris/showPdf.jsf?text=Artificial%2BIntelligence&docid=245242&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=9209601

[14] Thaler v Comptroller-General of Patents, Designs and Trademarks UKSC/2021/0201; [2023] UKSC 49

[15] At para 56 of the judgment of the Supreme Court (per Lord Kitchin; Lord Hodge, Lord Hamblen; Lord Leggatt; and Lord Richards concurring)

[16] Ibid. 

[17] Engel, A. (2020). Can a Patent Be Granted for an AI-Generated Invention?. “GRUR International: GRUR Journal of European and International IP Law (Formerly: Gewerblicher Rechtsschutz und Urheberrecht, Internationaler Teil)”, 69(11), 1123-1129 at 1127. This is backed up by other authors who comment that: “Patent law takes a less marked anthropocentric approach [than copyright], but even here, the so-called inventive step – which, together with novelty and industrial applicability, is required for an invention to be patentable – is normatively defined in terms of non-obviousness to a person skilled in the art. The very existence of moral rights (such as the so-called right of paternity) safeguarding the personality of the author or inventor suggests that the subject of protection can only be human.” Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at 20.

[18] Ibid.

[19] https://www.epo.org/en/news-events/in-focus/ict/artificial-intelligence

[20] Ibid.

[21] Daria Kim, ‘AI-Generated Inventions’: Time to Get the Record Straight?, GRUR International, Volume 69, Issue 5, May 2020, Pages 443–456, https://doi.org/10.1093/grurint/ikaa061

[22] https://www.epo.org/en/legal/guidelines-epc

[23] Gustavo Ghidini, IP and AI – for a Balanced, Non-Protectionist Stance, GRUR International, Volume 73, Issue 11, November 2024, Pages 1017–1018, https://doi.org/10.1093/grurint/ikae086 at p. 1017

[24] Ibid at 1017.

[25] https://www.epo.org/en/news-events/in-focus/ict/artificial-intelligence

[26] Ibid.

[27] Despoina Farmaki, The player, the programmer and the AI: a copyright odyssey in gaming, Journal of Intellectual Property Law & Practice, Volume 18, Issue 12, December 2023, Pages 920–928, https://doi.org/10.1093/jiplp/jpad095

[28] WIPO Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence, 21 May 2020, WIPO/IP/AI/2/GE/20/1 REV.: https://www.wipo.int/edocs/mdocs/mdocs/en/wipo_ip_ai_2_ge_20/wipo_ip_ai_2_ge_20_1_rev.pdf

[29] https://www.wipo.int/edocs/mdocs/mdocs/en/wipo_ip_ai_2_ge_20/wipo_ip_ai_2_ge_20_1_rev.pdf. In their report they refer to a technology gap and state:

“The number of countries with expertise and capacity in AI is limited. At the same time, the technology of AI is advancing at a rapid pace, creating the risk of the existing technology gap being exacerbated, rather than reduced, with time. In addition, while capacity is confined to a limited number of countries, the effects of the deployment of AI are not, and will not be, limited only to countries that possess capacity in AI.”

[30] https://www.wipo.int/edocs/mdocs/mdocs/en/wipo_ip_ai_2_ge_20/wipo_ip_ai_2_ge_20_1_rev.pdf

Chapter 4

Artificial Intelligence and Data Protection

Introduction

This chapter looks specifically at the issue of Data Protection. In some respects there are similarities with the previous discussion on copyright: Data Protection compliance will reach into the training regime of the Large Language Model (LLM) to examine the data used to train it, and will also consider the outputs generated in response to user prompts and whether there are sufficient safeguards around the process of delivering those responses. There are other issues too, including age verification procedures, transparency, accuracy of the data delivered, and the legal basis for processing. Finally, the Irish Data Protection Commission issued a note on the subject in July 2024, which will be outlined, as will the European Data Protection Board’s Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models.

Overview

While the concepts of Data Protection and the relatively recent GDPR developments in that field are beyond the scope of this book, there are overarching Data Protection issues which are touched by Artificial Intelligence, and this is likely to develop further. One of those issues concerns the sheer breadth of data now available to public authorities through the implementation of systems powered by AI. Advocate General Pitruzzella put the matter well in an opinion[1] when he said:

“The questions on which the Court is required to rule in this case embody one of the principal dilemmas of contemporary liberal democratic constitutionalism: what balance should be struck between the individual and society in this data age in which digital technologies enabled huge amounts of personal data to be collected, retained, processed and analysed for predictive purposes? The algorithms, big data analysis and artificial intelligence used by public authorities can serve to further and protect the fundamental interests of society to a hitherto unimaginable degree of effectiveness – from the protection of public health to environmental sustainability, from combating terrorism to preventing crime, and serious crime in particular. At the same time, the indiscriminate collection of personal data and the use of digital technologies by public authorities may give rise to a digital panopticon – where public authorities can be all-seeing without being seen – an omniscient power able to oversee and predict the behaviour of each and every person and take the necessary measures, to the point of the paradoxical outcome imagined by Steven Spielberg in the film Minority Report, where the perpetrator of a crime that has not yet been committed is deprived of his liberty. It is well known that in some countries society takes precedence over the individual and the use of personal data legitimately enables effective mass surveillance aimed at protecting what are considered to be fundamental public interests. In contrast, European constitutionalism, whether national or supranational, in which the individual and the individual’s liberties hold centre stage, imposes a significant obstacle to the advent of a mass surveillance society, especially now that the protection of privacy and personal data have been recognised as fundamental rights. To what extent, however, can that obstacle be set up without seriously undermining certain fundamental interests of society – such as those cited above – which may nevertheless be bound up with the constitution? This is at the heart of the relationship between the individual and society in the digital age. That relationship, on the one hand, calls for delicate balancing acts between the interests of society and the rights of individuals, premised on the paramount importance of the individual in the European constitutional tradition, and, on the other, makes it necessary to establish safeguards against abuse. Here, too, we have a contemporary twist on a classic theme of constitutionalism since, as The Federalist categorically asserted, men are not angels, which is why legal mechanisms are needed to constrain and monitor public authorities.”[2]

Italian Data Protection Authority 

In 2023 the Italian data protection authority (Garante per la protezione dei dati personali) issued a temporary block on ChatGPT.[3] The interim measure, adopted on 30 March 2023, ordered OpenAI to stop ChatGPT’s processing of personal data relating to individuals located in Italy, pending the outcome of an investigation into the privacy practices of ChatGPT.

The investigation is reported to have commenced after the authority became aware of a breach exposing payment details and sought further information from OpenAI about it.

The authority issued the following statement:[4]

“Artificial intelligence: stop to ChatGPT by the Italian SA

Personal data is collected unlawfully, no age verification system is in place for children

No way for ChatGPT to continue processing data in breach of privacy laws. The Italian SA imposed an immediate temporary limitation on the processing of Italian users’ data by OpenAI, the US-based company developing and managing the platform. An inquiry into the facts of the case was initiated as well.

A data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service had been reported on 20 March. ChatGPT is the best known among relational AI platforms that are capable to emulate and elaborate human conversations.

In its order, the Italian SA highlights that no information is provided to users and data subjects whose data are collected by Open AI; more importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.

As confirmed by the tests carried out so far, the information made available by ChatGPT does not always match factual circumstances, so that inaccurate personal data are processed.  

Finally, the Italian SA emphasizes in its order that the lack of whatever age verification mechanism exposes children to receiving responses that are absolutely inappropriate to their age and awareness, even though the service is allegedly addressed to users aged above 13 according to OpenAI’s terms of service.

OpenAI is not established in the EU, however it has designated a representative in the European Economic Area. It will have to notify the Italian SA within 20 days of the measures implemented to comply with the order, otherwise a fine of up to EUR 20 million or 4% of the total worldwide annual turnover may be imposed.”[5]

Roma, 31 March 2023

The following alleged GDPR violations were cited:[6]

OpenAI was given 20 days from the date of receiving the measure to respond to the alleged breaches and provide details of any corrective measures. This was the first action of its kind taken by a data protection authority in the EU in relation to the data processing of a large language model, and it deals in particular with the personal data implications which arise from the “training” of the model.

One source considered that the order failed to take account of the legitimate interests of OpenAI in the collection and use of personal data by the AI model for the purposes of training.[7] It was also noted that the interim measure did not draw a distinction between using personal data to build or train a large language model and inputting personal data into a model already on the market.[8]

The service was restored in Italy after OpenAI said it had fulfilled the demands made of it.[9]

“’ChatGPT is available again to our users in Italy,’ San Francisco-based OpenAI said by email. ‘We are excited to welcome them back, and we remain dedicated to protecting their privacy.’”[10]

The measures taken were said to include adding information on its website about how it collects and uses data that trains the algorithms, providing EU users with a new form for objecting to having their data used for training, and adding a tool to verify users’ ages when signing up. Some Italian users were reported to have shared screenshots of the changes, “including a menu button asking users to confirm their age and links to the updated privacy policy and training data help page.”[11]

One source commenting on the events in Italy observed the following:

“On an abstract level, a Generative AI models (sic) preserves privacy if it was trained in a privacy-sensitive way, processes prompts containing personal data diligently, and discloses information relating to identifiable persons in appropriate contexts and to authorised individuals only. Privacy and data protection are not binary variables and, therefore, what is the right context or the right recipients of the information is a matter of debate. In the context of LLMs, these debates are further complicated due to the diverse purposes, applications, and environments they operate in.”[12]

In Poland, a separate complaint was made by an individual who claimed ChatGPT hallucinated in respect of information it delivered about him.[13] This was followed by a further complaint, filed in Austria, concerning incorrect information about a client of the privacy campaign group noyb.[14]

A consideration of legal issues arising

The sources suggest that at least eight data protection issues arise:[15]

  1. The Legal Basis for AI training on personal data: Consent and the balancing test and Sensitive Data
  2. The Legal Basis for prompts containing personal data
  3. Information requirements
  4. Model inversion, data leakage, and the right to erasure
  5. Automated decision-making
  6. Protection of minors
  7. Purpose limitation and data minimization
  8. Accuracy

Each of these issues will be considered in turn below. It is worth noting that one author, in an analysis, “identifies serious challenges in the application of the GDPR to LLMs” before concluding that, with no “significant updates to the relevant GDPR provisions in sight”, pragmatic solutions are required under the current provisions.[16] That author explains how an LLM works, to give some context to the discussion:

“Unlike traditional software that follows explicit rule-based processes, GenAI systems do not have data silos to store and retrieve data in the traditional sense. Instead, they operate through a complex deep neural network architecture that resembles a vast interconnected web of computational nodes. These networks typically comprise tens or hundreds of layers, each containing many neurons: in the case of current LLMs, billions or trillions in total. Each neuron within these layers performs a series of operations that are intricately woven together to form the computational framework of the model.

A core principle of GenAI systems is to predict the most likely token in a series of tokens via patterns, generalizations, and relationships learned during training. For LLMs, tokens are words, parts of words, or other characters. When an LLM has produced the next token, that token becomes part of the input sequence for the subsequent prediction. In this context, a fundamental breakthrough that has enabled the current progress of LLMs and other GenAI systems is the transformer architecture. It has introduced an ‘attention’ mechanism that weighs the importance of different parts of the input differently when making a prediction. This mechanism allows the transformer to capture the immediate and broader context more effectively and dynamically across long sequences of input. This also includes nuances in language such as sarcasm, idioms, and complex grammatical structures.”[17]
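The token-by-token generation loop described in the passage above can be illustrated with a deliberately simplified sketch. The following Python fragment is an illustration only and is not drawn from any source cited in this chapter: it uses plain bigram counts over a tiny corpus in place of a neural network and has no attention mechanism, but it shows the core loop in which the model predicts the most likely next token and that token then becomes part of the input for the subsequent prediction.

# A minimal sketch (illustrative only) of next-token prediction: a toy "model"
# built from bigram counts over a tiny corpus. Real LLMs learn the same kind of
# next-token statistics with billions of parameters and an attention mechanism
# over long contexts, but the generation loop is conceptually similar.
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token becomes input".split()

# "Training": count which token tends to follow which.
bigram_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    bigram_counts[current][following] += 1

def generate(prompt_token, steps=5):
    """Greedily extend the sequence one token at a time."""
    sequence = [prompt_token]
    for _ in range(steps):
        candidates = bigram_counts.get(sequence[-1])
        if not candidates:
            break  # no continuation learned for this token
        next_token = candidates.most_common(1)[0][0]  # most likely next token
        sequence.append(next_token)                   # output becomes new input
    return " ".join(sequence)

print(generate("the"))

Running the sketch simply continues the prompt “the” with the most frequent continuations observed in the toy corpus, which is all that “prediction via patterns learned during training” means at this level of abstraction.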

The author notes that there are three distinct phases where LLM processing activities can occur: training, storage and use. Each of these phases potentially requires a different legal basis. 

The Legal Basis for AI training on personal data: Consent and the balancing test and Sensitive Data

A legal basis is required under Article 6 GDPR for every processing operation; this corresponds to the training phase noted by Bartels above. LLMs are covered by this requirement where their services are offered in the EU. Using personal data contained in training data is unlawful under the GDPR unless a specific legal basis applies to the processing. Consent constitutes the most prominent legal basis under the GDPR (Article 6(1)(a)). However, given the vast amount of data in the training corpus, it is unfeasible for consent to be obtained in every instance. In an article, Kuru considers the feasibility of seeking consent in every instance[18] and concludes that such an approach is “practically not possible”. Consequently, says the author, the matter rests on the provisions of Article 6(1)(f) or Article 9(2) GDPR.

Article 6(1)(f) GDPR considers the legitimate interests of the controller and whether these are overridden by the rights and freedoms of the data subjects.

“Whether the balancing test provides a legal basis is, unfortunately, a matter of case-by-case analysis.[19] Generally, particularly socially beneficial applications will speak in favour of developers; similarly the data subject is unlikely to prevail if the use of the data for AI training purposes could reasonably be expected by data subjects.”[20]

Bartels agrees that the balancing test is the appropriate test:

“With respect to the training of LLMs with information found on the internet through large-scale scraping or from sources such as books, a justification based on consent or contract performance is generally not an option. In such a case, it is generally impossible for developers to obtain consent or contracts from each data subject. The only option for lawful data processing during training is a balancing of interests according to Art. 6(1)(f) GDPR, where the legitimate interest in the processing outweighs the interests of the data subjects.”[21]

Kuru notes that OpenAI updated its privacy policy, effective 15 February 2024, and that the updated policy refers to the legitimate interests of OpenAI, third parties, and broader society as the legal basis for processing several types of personal data, including the “Data [Open AI] Receive From Other Sources” which contain “information that is publicly available on the internet”, to train OpenAI’s models.[22] However, the author continues, these statements only clarify the legal basis for processing the personal data of ChatGPT users, not that of non-users whose data is publicly available. An article published on OpenAI’s website states:

“We use training information lawfully. (…) and the primary sources of this training information are already publicly available. For these reasons, we base our collection and use of personal information that is included in training information on legitimate interests under privacy laws like the GDPR.”[23]

As part of that analysis a court might examine whether it is possible to build these models without using copyrighted materials; developers argue that it is not, though some sources are beginning to question this.[24] On that basis Kuru considers that OpenAI may not satisfy Article 6(1)(f), going as far as describing it as “very unlikely”.[25]

Another possible exception for this type of processing concerns sensitive data: data items protected by Article 9 GDPR.[26] The issue here is “the controllers ability to infer sensitive traits based on the available data – irrespective of whether the operator intends to make that inference.”[27] Machine-learning techniques such as those at issue in the training of LLMs allow for the deduction of those protected categories. This would prima facie bring the processing within Article 9 GDPR, and developers would need to argue that a specific exception in Article 9(2) GDPR applies. The research exemption in Article 9(2)(j) is limited to building models for research purposes and cannot apply to models exploited commercially.[28]

One source notes that OpenAI provides information on its website describing the way it processes data and the corresponding rights of users and non-users, and has clarified that either consent or legitimate interest is invoked for the processing of users’ data. Users can exercise their rights to access, deletion or correction of personal information, or object to the processing of their data, via their account or via a publicly accessible form. As regards the protection of minors, OpenAI has established an age gate and age verification tools.[29]

Article 6 GDPR states:

1. Processing shall be lawful only if and to the extent that at least one of the following applies:

(a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes;

(b) processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract;

(c) processing is necessary for compliance with a legal obligation to which the controller is subject;

(d) processing is necessary in order to protect the vital interests of the data subject or of another natural person;

(e) processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller;

(f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.

Point (f) of the first subparagraph shall not apply to processing carried out by public authorities in the performance of their tasks.

(2) Member States may maintain or introduce more specific provisions to adapt the application of the rules of this Regulation with regard to processing for compliance with points (c) and (e) of paragraph 1 by determining more precisely specific requirements for the processing and other measures to ensure lawful and fair processing including for other specific processing situations as provided for in Chapter IX.

The purpose of the processing shall be determined in that legal basis or, as regards the processing referred to in point (e) of paragraph 1, shall be necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller. That legal basis may contain specific provisions to adapt the application of rules of this Regulation, inter alia: the general conditions governing the lawfulness of processing by the controller; the types of data which are subject to the processing; the data subjects concerned; the entities to, and the purposes for which, the personal data may be disclosed; the purpose limitation; storage periods; and processing operations and processing procedures, including measures to ensure lawful and fair processing such as those for other specific processing situations as provided for in Chapter IX. The Union or the Member State law shall meet an objective of public interest and be proportionate to the legitimate aim pursued.

Article 9(1) GDPR states:

Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.

Article 9(2) sets out the exceptions to that prohibition.

Bartels considers that a broad interpretation of Article 9 is possible in this context:

“The most promising exception might be the processing for reasons of substantial public interest (Art. 9(2)(g) GDPR). Because of this relatively open wording, it could be argued that the benefits of GenAI are particularly significant and will ultimately serve the public at large in areas such as research, health, and education. Such a broad interpretation of Art. 9(2)(g) GDPR should be justified as long as the specific risk of processing sensitive data does not manifest itself in the training of LLMs, for example because such data is already publicly available on the internet. This requires, however, that the developer takes reasonable measures to filter out personal data from the training data that are still subject to the predominant privacy interests of the respective data subjects, for example in cases where such information has been illegally placed on the internet.”[31]

Kuru, who, it should be remembered, looked at this issue from the perspective of publicly accessible information, considered that the two possible exceptions are Article 9(2)(a) and Article 9(2)(e).[32] He examines both in turn, noting that Article 9(2)(a) presents difficulties concerning appropriate mechanisms for the withdrawal of consent, as well as the practical difficulty of providing information on processing activities to all of the affected individuals. As regards Article 9(2)(e), the author notes that even where data subjects choose to make their personal data publicly accessible, the CJEU requires such a decision to be made with full knowledge of the facts, citing Meta v Bundeskartellamt.[33]

Nor is this issue purely academic. In June 2024 it was announced that Meta, the parent company of Facebook, Instagram and WhatsApp, had paused plans to use personal data to train artificial intelligence models after concerns were raised by the Irish Data Protection Commission (DPC). Privacy campaigners had argued that Meta’s previously publicised move[34] to capture this data for training might breach the GDPR. The DPC issued a statement on the issue in the wake of the announced pause.[35] Subsequently, Meta criticised the EU regulatory position and said it ran the risk of Europe falling behind as cutting-edge products were released elsewhere while remaining unavailable in the EU.[36] In July 2024 the platform X, formerly Twitter, became subject to an inquiry by the same authority after users complained they had been automatically opted in to allowing their data to be used to train an AI model called Grok, made by xAI.[37] In August it was announced that court proceedings had been initiated, the DPC stating that it had not received the co-operation of X after requesting that it cease processing the personal data in question.[38] A few days later X gave an undertaking in the High Court to stop using EU user data to train its AI tool.[39] This was made permanent in September 2024. In a statement the DPC said it was referring the matter to the European Data Protection Board to adjudicate on and set rules.[40] This was seen as a step back by the Irish authority and a ceding of ground to the more federal European body.[41]

The moves by the DPC followed in the wake of an information note issued by it in July 2024.[42] In it the DPC refers to the training phase and states that “there may be a tendency to use large amounts of personal data during any training phases, sometimes unnecessarily and without your knowledge, agreement or permission”. This reference to the training phase may refer to the user’s first interaction with the Artificial Intelligence, where information or details, such as preferences, are provided by the user, but it could also refer to the training of the Artificial Intelligence using users’ personal data, and this is certainly the issue that arose with respect to Meta and X. The DPC also refers to other possible data protection issues which commonly arise in respect of AI, including “issues for you or others arising from the accuracy or retention of personal data used (or generated) – for example, in situations where the outputs of AI systems are used as part of a process to make decisions” and “issues for you, if models based on your personal data, are shared with others for purposes you are not aware of or do not agree with or who do not properly secure the data”. The Commission also refers to the concept of “incomplete” training data and suggests this may have knock-on consequences for any decision-making undertaken by the AI, which may have been caused by “biases in AI systems”.

Guidance is given to organisations: 

Advice for AI Product Designers, Developers and Providers was also given.[43] The advice was followed later in 2024 by an Opinion on certain data protection aspects related to the processing of personal data in the context of AI models.[1] In its Opinion the European Data Protection Board states that the GDPR would not apply to the processing of anonymised data by an Artificial Intelligence model in circumstances where the deployer of the model could demonstrate that anonymity applied, although a “mere assertion” in this respect would not be sufficient.[2] This would include an assessment by the relevant State Authority of the appropriateness of legitimate interest as a legal basis for the processing where this is put forward by the deployer.[3] The degree of risk raised by the processing may determine the level of detail required to satisfy the State Authority.[4] Further, the Opinion states as follows:

“Whether the development and deployment phases involve separate purposes (thus constituting separate processing activities) and the extent to which the lack of legal basis for the initial processing activity impacts the lawfulness of the subsequent processing, should be assessed on a case-by-case basis, depending on the context of the case.”[5]


[1] https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-282024-certain-data-protection-aspects_en

[2] Ibid. Para 134

[3] Ibid. Para 132

[4] Ibid. Para 130

[5] Ibid. Para 122

The Legal Basis for prompts containing personal data

This issue arises in respect of personal data that has actually been inputted into an LLM. Users may include personal data about themselves in prompts. Consent may work as a legal basis, as users have to register individually to use the LLM, and the controller may request consent as part of the registration process. Where prompts concern the personal data of third parties, however, a user cannot ordinarily give valid consent on another person’s behalf. This type of processing falls within the “use” phase noted by Bartels above. He states:

“As regards the legal basis for the processing of personal data in the case of use, it is less possible to make general statements. This is because the circumstances of each case can vary greatly, including the nature and extent of personal data being processed, and controllership. However, during use, the two following situations regularly emerge and require legal assessment: First, either the input provided by a user or the output generated by the model contains personal data of third parties. Second, the input data provided by the user contains personal data of the user itself or data of third parties that are not publicly available, and the developer uses such data to further train the model.

In the first situation, the third party (data subject) could have consented to such data processing (Arts. 4(11), 6(1)(a) GDPR), in particular through communication with the user, i.e., the controller, in the context of a contractual relationship between the data subject and the user (e.g., via a consent form or a consent banner).99 In addition, the processing of the third party’s personal data can be justified if such processing, beyond the mere context of a contractual relationship, is necessary for the performance (Art. 6(1)(b) GDPR). This could be the case where the user provides services to the data subject that require the use of the model, for example, where it is common in the future for businesses to provide necessary communication via LLMs. However, Art. 6(1)(b) GDPR is a rather unreliable legal basis because it must be examined in each individual case whether the processing is indeed instrumental to the performance of the contract.100 In addition, a legal basis might be provided by the balancing test (Art. 6(1)(f) GDPR) which again requires that the legitimate interests of the controller or third parties in the processing outweigh the interests of the data subject.101 The balancing test can result in the processing being justified where the user pursues reasonable aims in using the model and the risks to the data subject are limited. This would be different in cases where the user intends to harm the data subject, for example if the user uses the model to create spam emails tailored to the data subject.

In the second situation described above, where the developer uses input data provided by the user and data generated by the model (identifying the user and/or third parties) for further training, the developer is likely to be at least one of the controllers,102 in particular with regard to the subsequent processing phase of the further training of the model on the data. Regarding personal data identifying the user itself, Art. 6(1)(b) GDPR is unlikely to apply because the subsequent improvement of the model is, according to the criteria of case law, generally not necessary for the performance of the individual contract with the user.103 Further, a simple clause within terms of use is generally not considered to provide sufficient informed consent (Arts. 4(11), 6(1)(a) GDPR) under the criteria of case law.104 Therefore, the most likely legal basis for such use of personal data, both of the user and of third parties, collected during the use of the model is again Art. 6(1)(f) GDPR. However, in contrast to the initial training of an LLM, the balancing test could lead to a negative result.105 In the case of subsequent training and fine-tuning with user data, there are fewer legitimate interests to justify the data processing. In particular, as the public is likely to have less of an interest in the (mundane) personal data provided by the user during use (compared to the information found in the initial training data sets, e.g., Wikipedia), the collective freedom of information often referred to by the CJEU in its previous case law does not apply to the same extent in favour of the developer. In addition, the processing of personal data provided by user input is likely to affect the data subject to a much greater extent, as such user input data can be considerably more worthy of protection than data that is already freely available on the internet. This is the case, for example, when a user asks a chatbot about medical conditions the user might have or seeks advice on personal circumstances.”[44]

Information requirements

Articles 12 to 15 GDPR detail the obligations regarding information that must be provided to data subjects, and these obligations are described as posing “unique challenges for Generative AI due to the nature and scope of data they process”.[45] Article 14 GDPR, for instance, addresses the need for transparency where personal data is not collected directly from the individuals concerned. An exemption under Article 14(5) GDPR for disproportionate effort is possible; relevant factors may include the number of data subjects, the age of the data, and the safeguards implemented.[46]

Article 14 GDPR states:

Model inversion, data leakage, and the right to erasure

Model inversion describes attempts to reconstruct training data from the model itself, and it raises issues around “data leaks” and the right to be forgotten under Article 17 GDPR. The concept of memorisation, already mentioned earlier in this chapter in the context of The New York Times litigation, may even result in the LLM itself qualifying as personal data.[47] Broadly speaking this category describes what Bartels refers to as the “storage” phase of LLM processing of personal data. He states:

“Whereas to date only very limited case law and legal literature on the definition of ‘storage’ exist, legal commentaries emphasise that the term implies the controller’s aim to store the informational content of personal data in an embodied form on a data carrier so as to retrieve this data at a later point in time. However, this is not exactly the relationship between the training data and the output data of GenAI systems. In particular, LLMs do not simply retrieve training data, but generate new output based on probabilities. The training data was used to train the model, but is not embodied within it in any easily retrievable form. In contrast to data carriers such as notebooks and hard drives, LLMs contain statistics.”[48]
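One way to make the retrievability question discussed above concrete is a naive memorisation check: comparing a model’s output against its training corpus to see whether long spans are reproduced verbatim. The short Python sketch below is purely illustrative; the corpus, the output and the helper function are hypothetical and are not taken from any source cited in this chapter, and real data-leakage audits of LLMs are considerably more sophisticated.

# A minimal sketch (illustrative only) of a naive memorisation check: does a
# model output reproduce long verbatim spans from its training corpus? The
# corpus and output below are invented for illustration.

def longest_overlap(output: str, corpus: str) -> str:
    """Return the longest word sequence from `output` found verbatim in `corpus`."""
    words = output.split()
    best = ""
    for i in range(len(words)):
        for j in range(len(words), i, -1):
            span = " ".join(words[i:j])
            if span in corpus and len(span) > len(best):
                best = span
    return best

training_corpus = "Jane Doe of 12 Example Street was treated for a rare condition in 2021"
model_output = "records suggest Jane Doe of 12 Example Street was treated abroad"

leak = longest_overlap(model_output, training_corpus)
print(f"longest verbatim overlap ({len(leak.split())} words): {leak!r}")

A long verbatim overlap would suggest memorisation rather than mere statistical generalisation, which is precisely the distinction Bartels draws between a data carrier and a model that “contains statistics”.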

Automated decision-making

This concept is covered by Article 22 GDPR, which generally prohibits automated individual decision-making, including profiling, that produces legal effects concerning an individual or similarly significantly affects them, unless an exception applies. Where LLMs have downstream applications in areas such as credit scoring (high-risk under Recital 37 and Annex III of the EU AI Act) this prohibition becomes even more significant. Options for the provider of such a service might include obtaining explicit consent or demonstrating that the automated processing is necessary for contractual purposes.[49]

Article 22(1) GDPR states:

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

Protection of Minors

The issue here is the dissemination of age-appropriate content, and especially the risk of outputs that may not be suitable for minors. Article 8(2) GDPR requires a controller to make “reasonable efforts to verify” that consent has been given or authorised by the holder of parental responsibility. The actions taken by the Italian authority in this respect, set out above, should be noted.

Article 8(1) GDPR states:

Where point (a) of Article 6(1) applies, in relation to the offer of information society services directly to a child, the processing of the personal data of a child shall be lawful where the child is at least 16 years old. Where the child is below the age of 16 years, such processing shall be lawful only if and to the extent that consent is given or authorised by the holder of parental responsibility over the child. Member States may provide by law for a lower age for those purposes provided that such lower age is not below 13 years.

Purpose limitation and data minimization

Under Articles 5(1)(b) and 5(1)(c) GDPR, data controllers should collect personal data only insofar as it is relevant and necessary for a specific purpose. This may require developers to train LLMs on smaller datasets or, alternatively, to strengthen privacy-preserving measures in proportion to dataset size. Bartels states:

“[T]he use of large amounts of data to train GenAI models might conflict with the principles of data minimisation (Art. 5(1)(c) GDPR) and purpose limitation (Art. 5(1)(b) GDPR).

However, Art. 5(1)(c) GDPR does not set any absolute limit on the amount of data that may be processed. Instead, the processing of personal data must be adequate, relevant, and limited to what is necessary in relation to the purposes for which the data are processed. This provision, once again, leads to a proportionality test. In particular, the data processing must be necessary and appropriate.116 This is likely to be the case for the training of GenAI models.117 After all, a greater amount of data generally results in a higher quality model. A less restrictive interpretation of the principle of data minimisation is therefore also supported by the principle of data accuracy.118 However, the developer should ensure that only data sets containing personal data that add value are used, which might not be the case for certain categories of social media, for example.

Against this background, the principle of purpose limitation (Art. 5(1)(b) GDPR) also does not generally prevent the training and use of GenAI models.119 Article 5(1)(b) GDPR requires that the purposes to be pursued with the processing of personal data must be determined at the time the personal data is collected and that any changes to such purposes are to be assessed in the context of the legal basis of the processing (in particular Art. 6(1)(f) GDPR), with consideration also of additional compatibility criteria under Art. 6(4) GDPR.120 These include any logical link between the initial and subsequent purposes, the reasonable expectations of the data subjects, and the possible consequences of the intended further processing. In the context of GenAI models, the application of Art. 6(4) GDPR for purpose changes is typically required. In particular, while personal data may first be published on the internet for any reason and is then scraped from the internet to train a model, such data may subsequently be used for a variety of applications of the model that are not fully foreseeable in their totality and specificity at the time of publication or training. However, a proportionate and risk-based understanding of Arts. 5(1)(b), 6(4) GDPR does not constitute an insurmountable obstacle. This is because, in particular, the inclusion of any natural person’s data in a training data set generally does not result in a significant impact on that person. It must remain possible for a developer to determine subsequent processing purposes in a more abstract manner and, to the extent that future use cases of the model cannot be specified clearly and distinctly enough at the time of the data collection,121 to reasonably add broadly compatible and roughly foreseeable processing purposes under Art. 6(4) GDPR (particularly for use cases of LLMs).122 However, the developer shall enable the data subjects to understand the purposes for which their personal data is processed and the risks involved. Therefore, the developer should make reasonable efforts to provide the relevant information in a transparent and publicly accessible manner.”[50]

Accuracy

Bartels also refers to the issue of the accuracy of the data and whether the “hallucinations” of an LLM are compatible with the accuracy principle in Art. 5(1)(d) GDPR. He states:

“[S]ome legal commentators and the Italian Garante consider that the tendency of current models to invent facts in the event of uncertainty (‘hallucinations’) almost necessarily violates the principle of accuracy (Art. 5(1)(d) GDPR).111 Indeed, under certain circumstances, inaccurate output data can have significant negative consequences for data subjects. This could be the case, for example, if a user disseminates very unfavourable information about a third party that is based on the (incorrect) output of a model. Moreover, courts are unlikely to assume that the output data is in principle correct simply because the underlying statistical methods are as accurate as possible. As the principle of accuracy aims to protect individuals from the dissemination of incorrect information about them, the accuracy of the data is more likely to be assessed on the basis of an objective understanding of the text generated.112

However, Art. 5(1)(d) GDPR does not strictly require that all personal data must always be correct from the outset. For example, it has long been recognised that digital services, search engines, or other texts available on the internet113 can contain incorrect personal data and that this is generally not a severe violation of the principle of data accuracy.114 Although data subjects may have rights to rectification and erasure (Arts. 16, 17 GDPR), online services generally do not have to be shut down. Otherwise, it would be very difficult to offer digital services at all, especially online platforms or search engines that mediate third-party content. Instead, Art. 5(1)(d) GDPR merely requires controllers to use reasonable efforts to ensure data accuracy (cf. Recital 71(6) GDPR). In addition, developers may inform users of the possibility that the output data might be factually incorrect115 and provide them with the opportunity to suggest corrections, which can then be used for training.”[51]

Other notable compliance provisions for LLM developers include the obligation to keep records of processing activities under Article 30 GDPR and the data protection impact assessment under Article 35 GDPR.  

Article 54 of the EU AI Act states:

1. In the AI regulatory sandbox personal data lawfully collected for other purposes may be processed solely for the purposes of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met:

(a) AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person governed by public law or by private law and in one or more of the following areas:

(ii) public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems;

(iii) a high level of protection and improvement of the quality of the environment, protection of biodiversity, pollution as well as green transition, climate change mitigation and adaptation;

(iiia) energy sustainability

(iiib) safety and resilience of transport systems and mobility, critical infrastructure and networks;

(iiic) efficiency and quality of public administration and public services;

(b) the data processed are necessary for complying with one or more of the requirements referred to in Title III, Chapter 2 where those requirements cannot be effectively fulfilled by processing anonymised, synthetic or other non-personal data;

(c) there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation as well as response mechanism to promptly mitigate those risks and, where necessary, stop the processing;

(d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data;

(e) Providers can only further share the originally collected data in compliance with EU data protection law. Any personal data created in the sandbox cannot be shared outside the sandbox;

(f) any processing of personal data in the context of the sandbox do not lead to measures or decisions affecting the data subjects nor affect the application of their rights laid down in Union law on the protection of personal data;

(g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;

(h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law;

(i) complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation in Annex IV;

(j) a short summary of the AI project developed in the sandbox, its objectives and expected results published on the website of the competent authorities. This obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.

1a. For the purpose of prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Member State or Union law and subject to the same cumulative conditions as referred to in paragraph 1.

2. Paragraph 1 is without prejudice to Union or Member States legislation excluding processing for other purposes than those explicitly mentioned in that legislation, as well as to Union or Member States laws laying down the basis for the processing of personal data which is necessary for the purpose of developing, testing and training of innovative AI systems or any other legal basis, in compliance with Union law on the protection of personal data.

Several data protection authorities have already entered the fray and begun forming opinions on some of these items. The Bavarian Data Protection Authority, for instance, has issued a data protection checklist for AI.[52] In 2023 the UK Information Commissioner’s Office (ICO) warned businesses that they must consider and mitigate data protection risks before adopting generative AI technology and signalled that failure to do so would result in ICO intervention. It issued eight questions that should inform this discussion:

1. What is the lawful basis for processing personal data? 

2. Is the business a controller, joint controller or a processor? 

3. Has the business prepared a Data Protection Impact Assessment (DPIA)? 

4. How will the business ensure transparency? 

5. How will the business mitigate security risks? 

6. How will the business limit unnecessary processing? 

7. How will the business comply with individual rights requests? 

8. Will the business use generative AI to make solely automated decisions?

The ICO also made a number of updates to its AI guidance[53] after requests from UK industry to clarify requirements for fairness in AI. The updates are said to “support the UK government’s vision of a pro-innovation approach to AI regulation and more specifically its intention to embed considerations of fairness into AI.”[54] The guidance contains information on ensuring transparency in AI; how to ensure lawfulness in AI; information on accuracy and statistical accuracy; and fairness in AI.[55]

Ensuring Transparency

You need to be transparent about how you process personal data in an AI system, to comply with the principle of transparency.

Before you begin your processing, you must consider your transparency obligations towards individuals whose personal data you plan to process. The core issues about AI and the transparency principle are addressed in ‘Explaining decisions made with AI’ guidance, so are not discussed in detail here

At a high level, you need to include the following in the privacy information:

If you collect data directly from individuals, you must provide that privacy information to them at the time you collect it, before you use it to train a model or apply that model on those individuals. If you collect it from other sources, you must provide this information within a reasonable period and no later than one month, or even earlier if you contact that person or disclose that data to someone else. (…)”[56]

Ensuring Lawfulness

“The development and deployment of AI systems involve processing personal data in different ways for different purposes. You must break down and separate each distinct processing operation, and identify the purpose and an appropriate lawful basis for each one, in order to comply with the principle of lawfulness.

Whenever you are processing personal data – whether to train a new AI system, or make predictions using an existing one – you must have an appropriate lawful basis to do so.

Different lawful bases may apply depending on your particular circumstances. However, some lawful bases may be more likely to be appropriate for the training and / or deployment of AI than others. In some cases, more than one lawful basis may be appropriate.

At the same time, you must remember that:

Accuracy and Statistical Accuracy in AI

“What is the difference between accuracy in data protection law and ‘statistical accuracy’ in AI?

It is important to note that the word ‘accuracy’ has a different meaning in the contexts of data protection and AI. Accuracy in data protection is one of the fundamental principles, requiring you to ensure that personal data is accurate and, where necessary, kept up to date. It requires you to take all reasonable steps to make sure the personal data you process is not ‘incorrect or misleading as to any matter of fact’ and, where necessary, is corrected or deleted without undue delay.

Broadly, accuracy in AI (and, more generally, in statistical modelling) refers to how often an AI system guesses the correct answer, measured against correctly labelled test data. The test data is usually separated from the training data prior to training, or drawn from a different source (or both). In many contexts, the answers the AI system provides will be personal data. For example, an AI system might infer someone’s demographic information or their interests from their behaviour on a social network.

So, for clarity, in this guidance, we use the terms:

‘accuracy’ to refer to the accuracy principle of data protection law; and

‘statistical accuracy’ to refer to the accuracy of an AI system itself.

Fairness, in a data protection context, generally means that you should handle personal data in ways that people would reasonably expect and not use it in ways that have unjustified adverse effects on them. Improving the ‘statistical accuracy’ of your AI system’s outputs is one of your considerations to ensure compliance with the fairness principle.

Data protection’s accuracy principle applies to all personal data, whether it is information about an individual used as an input to an AI system, or an output of the system. However, this does not mean that an AI system needs to be 100% statistically accurate to comply with the accuracy principle.

In many cases, the outputs of an AI system are not intended to be treated as factual information about the individual. Instead, they are intended to represent a statistically informed guess as to something which may be true about the individual now or in the future. To avoid such personal data being misinterpreted as factual, you should ensure that your records indicate that they are statistically informed guesses rather than facts. Your records should also include information about the provenance of the data and the AI system used to generate the inference.

You should also record if it becomes clear that the inference was based on inaccurate data, or the AI system used to generate it is statistically flawed in a way which may have affected the quality of the inference.

Similarly, if the processing of the incorrect inference may have an impact on them, an individual may request the inclusion of additional information in their record countering the incorrect inference. This helps ensure that any decisions taken on the basis of the potentially incorrect inference are informed by any evidence that it may be wrong.

The UK GDPR mentions statistical accuracy in the context of profiling and automated decision-making at Recital 71. This states organisations should put in place ‘appropriate mathematical and statistical procedures’ for the profiling of individuals as part of their technical measures. You should ensure any factors that may result in inaccuracies in personal data are corrected and the risk of errors is minimised.

If you use an AI system to make inferences about people, you need to ensure that the system is sufficiently statistically accurate for your purposes. This does not mean that every inference has to be correct, but you do need to factor in the possibility of them being incorrect and the impact this may have on any decisions that you may take on the basis of them. Failure to do this could mean that your processing is not compliant with the fairness principle. It may also impact on your compliance with the data minimisation principle, as personal data, which includes inferences, must be adequate and relevant for your purpose.

Your AI system therefore needs to be sufficiently statistically accurate to ensure that any personal data generated by it is processed lawfully and fairly.

However, overall statistical accuracy is not a particularly useful measure, and usually needs to be broken down into different measures. It is important to measure and prioritise the right ones. If you are in a compliance role and are unsure what these terms mean, you should consult colleagues in the relevant technical roles.”[58]
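The ICO’s notion of “statistical accuracy” can be reduced to a simple calculation: the proportion of an AI system’s guesses that match correctly labelled test data. The following minimal Python sketch is an illustration only (the age-band inferences are invented), and its comments note how this measure differs from the GDPR accuracy principle, which concerns whether recorded personal data about an individual is factually correct.

# A minimal sketch (illustrative only) of "statistical accuracy" as the ICO uses
# the term: the share of correct guesses measured against correctly labelled
# test data. The inferences below are hypothetical.

def statistical_accuracy(predictions, ground_truth):
    """Fraction of predictions that match the labelled test data."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Hypothetical inferences by an AI system about users' age bands.
predicted = ["18-24", "25-34", "25-34", "35-44", "18-24"]
actual    = ["18-24", "25-34", "35-44", "35-44", "25-34"]

print(f"statistical accuracy: {statistical_accuracy(predicted, actual):.0%}")
# 60% here: each inference is a statistically informed guess, not a fact; under
# the data protection accuracy principle, records should make clear that such
# outputs are inferences rather than factual statements about the individual.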

Fairness in AI:

“Fairness is a key principle of data protection and an overarching obligation when you process personal data. You must use personal data fairly to comply with various sections of the legislation, including Article 5(1)(a) of the UK GDPR, Section 2(1)(a) of the Data Protection Act (2018), as well as Part 3 and Part 4 of the legislation.

In simple terms, fairness means you should only process personal data in ways that people would reasonably expect and not use it in any way that could have unjustified adverse effects on them. You should not process personal data in ways that are unduly detrimental, unexpected or misleading to the individuals concerned.

If you use an AI system to infer data about people, you need to ensure that the system is sufficiently statistically accurate and avoids discrimination. This is in addition to considering the impact of individuals’ reasonable expectations for this processing to be fair.

Any processing of personal data using AI that leads to unjust discrimination between people, will violate the fairness principle. This is because data protection aims to protect individuals’ rights and freedoms with regard to the processing of their personal data, not just their information rights. This includes the right to privacy but also the right to non-discrimination. The principle of fairness appears across data protection law, both explicitly and implicitly. More specifically, fairness relates to:

Cybersecurity

Cybersecurity, or the maintenance of a secure technological environment, is a challenge for any online entity – including LLMs. Their training on vast amounts of data leaves them vulnerable to a range of attacks, including data poisoning attacks and adversarial attacks, which can alter their outputs. Attacks can originate from private, nefarious or, it has been argued, state-sponsored actors. This comes against the backdrop of an announcement that experts could “hack” into LLM models in “about 30 minutes”.[60] Counsel Magazine mentions the issue of security within the context of a barrister’s law practice, highlighting so-called sophisticated spear-phishing attacks as a particular vulnerability.[61]
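To illustrate, in a deliberately simplified way, what an adversarial attack involves, the following Python sketch shows how a small, targeted change to an input can flip a toy linear classifier’s decision. The “content filter”, its features and its weights are purely hypothetical and invented for illustration; attacks on LLMs, such as crafted prompts or poisoned training data, are far more complex, but they exploit the same underlying sensitivity of a model to carefully chosen inputs.

# A minimal sketch (illustrative only) of an adversarial example against a toy
# linear classifier: a small, targeted nudge to the input flips the decision
# even though the input is barely changed. The model and features are invented.

def predict(weights, bias, x):
    """Toy linear 'content filter': returns (label, score); label 1 = allow, 0 = block."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return (1 if score > 0 else 0), score

# Hand-set weights: more suspicious words and links push the score towards 'block'.
weights = [-1.0, -0.8, 0.5]   # features: suspicious_word_count, link_count, account_age
bias = 1.5

suspicious = [1.5, 1.2, 1.0]
label, score = predict(weights, bias, suspicious)
print("original input :", suspicious, "-> label", label, f"(score {score:.2f})")   # blocked

# Adversarial tweak: nudge each feature slightly in the direction that raises the
# score (the sign of its weight), so the previously blocked input is now allowed.
epsilon = 0.3
adversarial = [round(xi + epsilon * (1 if w > 0 else -1), 2) for xi, w in zip(suspicious, weights)]
label, score = predict(weights, bias, adversarial)
print("perturbed input:", adversarial, "-> label", label, f"(score {score:.2f})")  # allowed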

Our starting point, on protection, at least in the European Union, is the EU AI Act – looked at in more detail later in this book. 

Recital 60r of that Regulation states:

“Providers of general purpose AI models with systemic risks should assess and mitigate possible systemic risks. If, despite efforts to identify and prevent risks related to a general-purpose AI model that may present systemic risks, the development or use of the model causes a serious incident, the general purpose AI model provider should without undue delay keep track of the incident and report any relevant information and possible corrective measures to the Commission and national competent authorities. Furthermore, providers should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to systemic risks associated with malicious use of or attacks should duly consider accidental model leakage, unsanctioned releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft. This protection could be facilitated by securing model weights, algorithms, servers, and datasets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved.”

Article 15 of the Regulation states:

“1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle.

1a. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 of this Article and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholder and organisations such as metrology and benchmarking authorities, encourage as appropriate, the development of benchmarks and measurement methodologies.

2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.

3. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken towards this regard. The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures.

4. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training dataset (‘data poisoning’), or pre-trained components used in training (‘model poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks or model flaws.”

In total there are some 23 provisions in the Regulation, whether articles or recitals, that mention cybersecurity. In March 2024 the European Parliament adopted plans to boost the security of digital products in the EU against cyber threats pursuant to a Cyber Resilience Act.[62] The aim of the Regulation is to ensure that products with digital features are secure to use, resilient against cyber threats and provide enough information about their security properties. Important and critical products will be placed into lists based on their level of criticality and the level of risk they present. These lists will be kept updated by the European Commission. The European Union Agency for Cybersecurity (ENISA),[63] founded in 2004, will be more closely involved when vulnerabilities are found and incidents occur. On the subject of Artificial Intelligence that Agency says the following:

“Artificial Intelligence (AI) is an emerging concept facilitating intelligent and automated decision-making and is thus becoming a prerequisite for the deployment of IoT and Industry 4.0 scenarios as well as other application areas. While it is undoubtedly beneficial, one should not ignore the fact that AI and its application to automated decision-making – especially in deployments where safety is critical such as in autonomous vehicles – might open new avenues in manipulation and attack methods, while creating new challenges to privacy.

When considering security in the context of AI, the duality of this interplay needs to be highlighted. On the one hand, one needs to consider that AI can be exploited to manipulate expected outcomes, but on the other hand AI techniques can be used to support security operations and even to decrease adversarial attacks. Before considering using AI as a tool to support cybersecurity, it is essential to understand what needs to be secured and to develop specific security measures to ensure that AI itself is secure and trustworthy.”[64]

The Regulation originated in September 2022, when the Commission presented a legislative proposal for the EU Cyber Resilience Act (CRA) introducing mandatory requirements for products with digital elements. The proposal covered a range of products connected directly or indirectly to another device or network, including hardware, software and ancillary services. This was followed by the European Economic and Social Committee’s adoption of its opinion in December 2022. In the European Parliament the file was assigned to the Committee on Industry, Research and Energy, with opinions requested of both the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs.[65] In 2023 the Council reached a common position in Coreper, and the co-legislators met in trilogue, reaching provisional agreement on the text at the end of November 2023.

The integration of AI into any of the items listed in Annex III of the Regulation (e.g. password managers, secure cryptoprocessors) means the AI model is automatically subject to the enhanced cybersecurity protocols in the Regulation. Critical products, as designated by the Commission, are those integral to cybersecurity infrastructure, such as hardware devices with security boxes, smart meter gateways and smart cards. Again, integration of AI systems into these items will bring them within the more stringent cybersecurity requirements of the Regulation. Member States can place the most stringent obligations on products concerning national security and defence. The Regulation does not explicitly mention AI or LLMs, probably because the original version of the text did not contemplate such systems. One source says that:

“Adapting the [Regulation] to explicitly include Generative AI should be relatively straightforward. The [EU AI Act] has already laid down a risk-tiered classification and specific regulations for Generative AI (ie GPAI). This pre-existing framework offers a clear pathway for incorporating Generative AI into the [Regulation], potentially through the European Commission’s delegated acts. Such integration would enhance the [Regulation’s] effectiveness in governing AI technologies and align it more closely with the evolving landscape of AI and its potential risks, thereby reinforcing the EU’s commitment to a comprehensive and harmonized legal framework for AI regulation.”[66]  

The authors are of the view that general-purpose AI systems should be included under the more stringent category of cybersecurity requirements pursuant to Annex III of the Regulation.[67] The Council has confirmed that AI systems considered at high risk of causing harm will comply with the AI Act’s cybersecurity requirements if they respect the essential requirements listed in the Cyber Resilience Act and demonstrate that with an EU declaration of conformity.[68]

The EU’s revised Network and Information Systems Directive (NIS2)[69] should also be noted. This Directive is the EU-wide legislation on cybersecurity and provides measures to boost the overall level of cybersecurity in the EU.

The European Commission states:

“The Directive on measures for a high common level of cybersecurity across the Union provides legal measures to boost the overall level of cybersecurity in the EU by ensuring: (…)”[70]

Digital Services

The EU Digital Services Act (DSA)[71] has applied in full since 17 February 2024. As an EU Regulation it has direct legal effect in EU Member States, and its provisions and obligations apply directly to providers of online intermediary services; nonetheless, the Department of Enterprise, Trade and Employment says national legislation was necessary to implement those provisions of the EU Regulation that provide for its supervision and enforcement. Consequently the Digital Services Act 2024 was signed by the President in February 2024.[72]

The DSA places different obligations on different online entities according to their size, role and impact in the online ecosystem. Very large online platforms and search engines – those reaching more than 10% of the EU’s 450 million consumers, i.e. 45 million users – pose particular risks in the dissemination of illegal content and societal harm.

Recital 76 states:

“Very large online platforms and very large online search engines may cause societal risks, different in scope and impact from those caused by smaller platforms. Providers of such very large online platforms and of very large online search engines should therefore bear the highest standard of due diligence obligations, proportionate to their societal impact. Once the number of active recipients of an online platform or of active recipients of an online search engine, calculated as an average over a period of six months, reaches a significant share of the Union population, the systemic risks the online platform or online search engine poses may have a disproportionate impact in the Union. Such significant reach should be considered to exist where such number exceeds an operational threshold set at 45 million, that is, a number equivalent to 10 % of the Union population. This operational threshold should be kept up to date and therefore the Commission should be empowered to supplement the provisions of this Regulation by adopting delegated acts, where necessary.”[73]
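
Purely by way of illustration, the designation arithmetic described in Recital 76 can be sketched as follows. The monthly figures are hypothetical; only the 45 million threshold (10% of an assumed Union population of 450 million) is taken from the text.

```python
# An illustrative sketch of the Recital 76 designation arithmetic.
# The monthly figures are invented; only the threshold comes from the text.

EU_POPULATION = 450_000_000
THRESHOLD = EU_POPULATION // 10  # 45 million active recipients

# Hypothetical average monthly active recipients over a six-month period.
monthly_active_recipients = [43_500_000, 44_800_000, 46_200_000,
                             47_100_000, 45_900_000, 46_600_000]

six_month_average = sum(monthly_active_recipients) / len(monthly_active_recipients)

print(f"six-month average: {six_month_average:,.0f}")
print("exceeds the 45 million threshold?", six_month_average > THRESHOLD)
```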

Recital 12 states:

“In order to achieve the objective of ensuring a safe, predictable and trustworthy online environment, for the purpose of this Regulation the concept of ‘illegal content’ should broadly reflect the existing rules in the offline environment. In particular, the concept of ‘illegal content’ should be defined broadly to cover information relating to illegal content, products, services and activities. In particular, that concept should be understood to refer to information, irrespective of its form, that under the applicable law is either itself illegal, such as illegal hate speech or terrorist content and unlawful discriminatory content, or that the applicable rules render illegal in view of the fact that it relates to illegal activities. Illustrative examples include the sharing of images depicting child sexual abuse, the unlawful non-consensual sharing of private images, online stalking, the sale of non-compliant or counterfeit products, the sale of products or the provision of services in infringement of consumer protection law,[74] the non-authorised use of copyright protected material, the illegal offer of accommodation services or the illegal sale of live animals. In contrast, an eyewitness video of a potential crime should not be considered to constitute illegal content, merely because it depicts an illegal act, where recording or disseminating such a video to the public is not illegal under national or Union law. In this regard, it is immaterial whether the illegality of the information or activity results from Union law or from national law that is in compliance with Union law and what the precise nature or subject matter is of the law in question.”[75]

Under the new rules, large online platforms must meet obligations in respect of: counteracting the circulation of illegal goods, services or content online;[76] traceability of business users;[77] safeguards for users; transparency measures;[78] and risk assessments.[79] Codes of conduct[80] and technical standards[81] will assist platforms in their compliance with the new rules.[82] Importantly, in terms of liability, platforms are not liable for users’ unlawful behaviour unless they are aware of illegal acts and fail to remove them.[83]

Exemptions under the liability regime of DSA exist for: (i) mere conduits (Article 4), (ii) providers of caching services (Article 5) and (iii) host providers with no “actual knowledge” of illegal activity, or, otherwise, who acted expeditiously upon becoming aware (Article 6).[84]

Article 6 states:

“1.   Where an information society service is provided that consists of the storage of information provided by a recipient of the service, the service provider shall not be liable for the information stored at the request of a recipient of the service, on condition that the provider:

(a) does not have actual knowledge of illegal activity or illegal content and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or illegal content is apparent; or
(b) upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the illegal content.

2.   Paragraph 1 shall not apply where the recipient of the service is acting under the authority or the control of the provider.

3.   Paragraph 1 shall not apply with respect to the liability under consumer protection law of online platforms that allow consumers to conclude distance contracts with traders, where such an online platform presents the specific item of information or otherwise enables the specific transaction at issue in a way that would lead an average consumer to believe that the information, or the product or service that is the object of the transaction, is provided either by the online platform itself or by a recipient of the service who is acting under its authority or control.

4.   This Article shall not affect the possibility for a judicial or administrative authority, in accordance with a Member State’s legal system, to require the service provider to terminate or prevent an infringement.”[85]

One source notes that:

“EU legislation lacks specific regulations for misinformation created by Generative AI. As LLMs become increasingly integrated into online platforms, expanding the Digital Services Act (DSA) to include them, and mandating online platforms to prevent misinformation, seems the most feasible approach.”[86]

Digital Markets Act

The Digital Markets Act became fully applicable in March 2024 and places strict requirements on major tech providers designated as “gatekeepers”, including Apple, Amazon, Meta and Microsoft. These gatekeepers are subject to a number of obligations aimed at facilitating an open online market. Key among them are interoperability requirements, including an obligation to ensure the interoperability of messaging services and to allow end users to install third-party apps or app stores that interact with the gatekeeper’s own operating system. These interoperability provisions have raised concerns for gatekeepers from a security and privacy perspective. The matter most recently came to a head when Apple announced that Artificial Intelligence features on its latest iPhone 16 would not be available in the EU upon roll-out elsewhere. In response, EU Executive Vice-President Margrethe Vestager noted that:

“I find that very interesting that they say we will now deploy AI where we’re not obliged to enable competition. I think that is the most stunning, open declaration that they know 100% that this is another way of disabling competition where they have a stronghold already”.[87]

Artificial Intelligence Image Manipulation

The New York Times in a feature[88] in April 2024 presented a bleak picture of a phenomenon described as AI image manipulation which was, according to the correspondence quoted, happening in schools all over America. Put simply, the practice involves a legitimate photograph being manipulated by AI, using a “nudification” app, to create a “deepfake”[89] or “deepnude” picture of the individual with exposed body parts. The article described several incidents of the practice at different schools where a deepnude image was generated and then circulated among the school community. One of the schools in question, in an email to parents, said this:

“We want to make it unequivocally clear that this behaviour is unacceptable and does not reflect the values of our school community. Although we are aware of similar situations occurring all over the nation, we must act now. This behaviour rises to a level that requires the entire community to work in partnership to ensure it stops immediately. Artificial Intelligence (AI) image generation is a technology that uses machine learning algorithms to create or manipulate digital images. In this context, it has been used inappropriately to create images that are not only unethical but deeply concerning. (…) While the law is still catching up with the rapid advancement of technology and such acts may not yet be classified as a crime, we are working closely with the Beverly Hills Police Department throughout this investigation.”[90]

The article indicated that at least one of the incidents was proceeding by way of civil action in the courts, and it noted that a call had been made for legislative action on the issue.[91] The FBI has posted a warning that child sexual abuse material (CSAM) created with content-manipulation technologies, including generative artificial intelligence (AI), is illegal.[92]

The United States Congress has introduced a Bill to deal with the fall-out from deepfake technology, including with respect to protecting national security and providing legal recourse to victims of harmful deepfakes.[93] The Bill defines an “advanced technological false personation record” as:

“(1) ADVANCED TECHNOLOGICAL FALSE PERSONATION RECORD.—The term ‘advanced technological false personation record’ means any deepfake, which—

“(A) a reasonable person, having considered the visual or audio qualities of the record and the nature of the distribution channel in which the record appears, would believe accurately exhibits—

“(i) any material activity of a living person which such living person did not in fact undertake; or

“(ii) any material activity of a deceased person which such deceased person did not in fact undertake, and the exhibition of which is substantially likely to either further a criminal act or result in improper interference in an official proceeding, a public policy debate, or an election; and

“(B) was produced without the consent of such living person, or in the case of a deceased person, such person or the heirs thereof.

Deepfake is defined as follows:

“(3) DEEPFAKE.—The term ‘deepfake’ means any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof—

“(A) which appears to authentically depict any speech or conduct of a person who did not in fact engage in such speech or conduct; and

“(B) the production of which was substantially dependent upon technical means, rather than the ability of another person to physically or verbally impersonate such person.

Section 1041 creates an offence and applies to any person who “produces an advanced technological false personation record with the intent to distribute such record over the internet or knowledge that such record shall be so distributed” and who fails to comply with obligations set down in the Bill, including content provenance requirements; audiovisual disclosure (“not less than 1 clearly articulated verbal statement that identifies the record as containing altered audio and visual elements”); and visual disclosure (“an unobscured written statement in clearly readable text appearing at the bottom of the image throughout the duration of the visual element that identifies the record as containing altered visual elements”). The Bill also provides for victim assistance:

§ 1042. Deepfakes victim assistance

“(a) Coordinator For Violations Directed By Foreign Nation-States.—The Attorney General shall designate a coordinator in each United States Attorney’s Office to receive reports from the public regarding potential violations of section 1041 relating to deepfake depictions produced or distributed by any foreign nation-state, or any agent acting on its behalf, and coordinate prosecutions for any such violation.

“(b) Coordinator For False Intimate Depictions.—The Attorney General shall designate a coordinator in each United States Attorney’s Office to receive reports from the public regarding potential violations of section 1041 relating to deepfake depictions of an intimate and sexual nature, and coordinate prosecutions for any such violation.

The United Kingdom announced its intention to create a new law on deepfake technology in April 2024.[94] The Bill will create a new offence of making a sexually explicit ‘deepfake’ image. Search-engine giant Google announced new features to tackle deepfake pornography, including a more rapid response in removing offending images.[95] California was among the States to pass laws regulating deepfakes,[96] including a provision, known as AB 2355, that makes it mandatory to disclose that content has been altered.[97] That State followed several others that enacted measures around deepfake technology in 2024, including Washington, Minnesota, Oregon, Utah, Colorado, New York, Arizona and New Mexico.[98] In Ireland the newly appointed AI Advisory Council advocated for the criminalisation of deepfake images in a 2025 report sent to Government.[1] In 2026 Grok, the AI tool for the social media platform X, encountered massive pushback globally over its sexualisation of images fed to it as part of user requests, pushing the issue of deepfakes to the forefront. Denmark has presented a proposal to extend copyright law to cover a person’s likeness, face and voice.[1]


[1] https://enterprise.gov.ie/en/publications/publication-files/ai-advisory-council-recommendations-helping-to-shape-irelands-ai-future.pdf


[1] https://schjodt.com/news/owning-the-self-denmarks-copyright-turn-against-deepfakes

Conclusion

This chapter has considered developments in the field of Artificial Intelligence as they interact with the protective governance structure of Data Protection. The chapter has outlined the dispute in Italy, the first of its kind in the world, and the measures taken by Artificial Intelligence company Open AI to comply with the findings of the Italian Data Protection Authority. The chapter has also considered the legal issues arising more generally, including the legal basis for training an LLM and for outputs based on user inputs; information requirements, as well as model inversion, data leakage and the right to erasure; automated decision-making; adequate protection for minors; and purpose limitation and data minimization. Recent developments in this space also include a move by data protection authorities in Ireland, Australia, Korea, France and the United Kingdom to implement “data governance that promotes innovative and privacy-protecting AI.”[1]


[1] The declaration was signed in Paris, at an OECD hosted event organised by the Commission nationale de l’informatique et des libertés (CNIL) and the Data Protection Authority of South Korea. (https://www.dataprotection.ie/en/news-media/latest-news/data-protection-authorities-sign-joint-declaration-ai)

The chapter has also considered the related area of cybersecurity, looking at the Cyber Resilience Act as well as the relevant provisions of the EU AI Act. Digital services were also briefly considered, along with the obligations placed on different online entities according to their size, role and impact in the online ecosystem pursuant to the EU Digital Services Act and the Digital Services Act 2024. Finally, the frontier of AI image manipulation was outlined, where disturbing stories are emerging from schools in the United States of America of deepnude photographs being generated and circulated among the school body.

At present authorities are still getting to grips with the issues raised in this fast-moving sector. Guidance from the European Data Protection Board[99] has been followed by a UK consultation on generative AI and Data Protection.[100] CNIL (France) also published recommendations on AI systems in 2024.[101]


[1] Opinion of Advocate General Pitruzzella in Case C-817/19 Ligue des droits humains v Conseil des ministers, https://curia.europa.eu/juris/document/document.jsf?text=Artificial%2BIntelligence&docid=252841&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=9209601#ctx1

[2] ibid

[3] https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9870847#english

[4] https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9870847#english

[5] Ibid.

[6] https://www.dataprotectionreport.com/2023/04/italian-garante-bans-chat-gpt-from-processing-personal-data-of-italian-data-subjects/

[7] https://www.dataprotectionreport.com/2023/04/italian-garante-bans-chat-gpt-from-processing-personal-data-of-italian-data-subjects/

[8] https://www.dataprotectionreport.com/2023/04/italian-garante-bans-chat-gpt-from-processing-personal-data-of-italian-data-subjects/

[9] https://apnews.com/article/chatgpt-openai-data-privacy-italy-b9ab3d12f2b2cfe493237fd2b9675e21#

[10] https://apnews.com/article/chatgpt-openai-data-privacy-italy-b9ab3d12f2b2cfe493237fd2b9675e21#

[11] https://apnews.com/article/chatgpt-openai-data-privacy-italy-b9ab3d12f2b2cfe493237fd2b9675e21#

[12] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at 7 to 8.

[13] https://lukaszolejnik.com/stuff/OpenAI_GDPR_Complaint_LO.pdf?ref=blog.lukaszolejnik.com see also the article Whitcroft, There is debate over the extent to which the law addresses the risks arising from the use of AI, PC Pro February 2024, pp 116 – 117.

[14] https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it

[15] See Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565; and Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060

[16] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060

[17] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060 at pp. 1-2.

[18] Taner Kuru, Lawfulness of the mass processing of publicly accessible online data to train large language models, International Data Privacy Law, 2024, ipae013, https://doi.org/10.1093/idpl/ipae013

[19] Citing Gil Gonzalez and de Hert Understanding the Legal Provisions That Allow Processing and Profiling of Personal Data – an Analysis of GDPR Provisions and Principles, ERA Forum 2019(4) 597 to 621 https://doi.org/10.1007/s12027-018-0546-z; Peloquin, DiMaio, Nierer, Barnes, Disruptive and Avoidable: GDPR Challenges to Secondary Research Uses of Data, European Journal of Human Genetics 2020 28 (6) 697-705 https://doi.org/10.1038/s41431-020-0596-x; Donnelly and McDonagh Health Research, Consent, and the GDPR Exemption, European Journal of Health Law 2019 26(2) 97 – 119 https://doi.org/10.1163/15718093-12262427

[20] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at p. 9.

[21] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060 at p. 6.

[22] Taner Kuru, Lawfulness of the mass processing of publicly accessible online data to train large language models, International Data Privacy Law, 2024, ipae013, https://doi.org/10.1093/idpl/ipae013 at p. 6.

[23] Taner Kuru, Lawfulness of the mass processing of publicly accessible online data to train large language models, International Data Privacy Law, 2024, ipae013, https://doi.org/10.1093/idpl/ipae013 at p. 6 citing Chat GPT (no 25).

[24] Ibid at p. 8 citing Knibbs, ‘Here’s the proof you can train an AI model without slurping copyrighted content’, Wired, 20th March 2024. 

[25] Ibid at p. 8. 

[26] Kuru considers that any assessment should be carried out using this Article – see Taner Kuru, Lawfulness of the mass processing of publicly accessible online data to train large language models, International Data Privacy Law, 2024, ipae013, https://doi.org/10.1093/idpl/ipae013 at p. 10.

[27] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060 at p. 10.

[28] Recitals 159 and 162 GDPR.

[29] Hacker, Engel, Mauer, Regulating ChatGPT and other Large Generative AI Models, 2023, Association for Computing Machinery, https://doi.org/10.1145/3593013.3594067

[30] Taner Kuru, Lawfulness of the mass processing of publicly accessible online data to train large language models, International Data Privacy Law, 2024, ipae013, https://doi.org/10.1093/idpl/ipae013 at p. 11.

[31] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060 at p. 8.

[32] Taner Kuru, Lawfulness of the mass processing of publicly accessible online data to train large language models, International Data Privacy Law, 2024, ipae013, https://doi.org/10.1093/idpl/ipae013 at p. 11.

[33] CJEU – C-252/21 cited at Taner Kuru, Lawfulness of the mass processing of publicly accessible online data to train large language models, International Data Privacy Law, 2024, ipae013, https://doi.org/10.1093/idpl/ipae013 at p. 13.

[34] https://www.irishtimes.com/technology/consumer-tech/2024/06/06/facebook-wants-to-use-your-data-to-train-ai-can-you-stop-it/

[35] https://www.dataprotection.ie/en/news-media/latest-news/dpcs-engagement-meta-ai

[36] https://www.ft.com/content/3c9d4172-91c0-417a-b347-00b4a9aee892

[37] https://www.ft.com/content/1e8f5778-a592-42fd-80f6-c5daa8851a21

[38] https://www.irishtimes.com/business/2024/08/06/dpc-takes-court-action-against-twitter-over-ai-user-data-concerns/

[39] https://www.irishtimes.com/business/2024/08/08/x-stopped-using-eu-user-data-to-train-its-ai-tool-dpc-says/

[40] https://dataprotection.ie/en/news-media/press-releases/data-protection-commission-welcomes-conclusion-proceedings-relating-xs-ai-tool-grok

[41] https://www.independent.ie/business/technology/x-permanently-stops-grok-ai-from-using-eu-citizens-tweets-after-court-action-by-irish-data-watchdog/a168142842.html

[42] https://www.dataprotection.ie/en/dpc-guidance/blogs/AI-LLMs-and-Data-Protection

[43] Ibid. 

[44] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060 at pp. 9-10.

[45] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at p. 11 citing Hacker, Engel, and Mauer 2023. Technical Report 2 to 3

[46] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at p. 11.

[47] Ibid at 12.

[48] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060 at p. 4.

[49] Article 22(2) GDPR.

[50] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060 at pp. 11-12.

[51] Marvin Bartels, A Balancing Act: Data Protection Compliance of Artificial Intelligence, GRUR International, 2024, ikae060, https://doi.org/10.1093/grurint/ikae060 at p. 11.

[52] https://www.lda.bayern.de/media/ki_checkliste.pdf

[53] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/

[54] Ibid.

[55] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/

[56] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-transparency-in-ai/

[57] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-lawfulness-in-ai/

[58] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/what-do-we-need-to-know-about-accuracy-and-statistical-accuracy/

[59] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/

[60] https://www.ft.com/content/14a2c98b-c8d5-4e5b-a7b0-30f0a05ec432

[61] Thomas, AI: The five biggest risks for barristers, Counsel, October 2024, at p. 22

[62] https://www.europarl.europa.eu/news/en/press-room/20240308IPR18991/cyber-resilience-act-meps-adopt-plans-to-boost-security-of-digital-products

[63] https://www.enisa.europa.eu

[64] https://www.enisa.europa.eu/topics/iot-and-smart-infrastructures/artificial_intelligence

[65] https://www.europarl.europa.eu/legislative-train/carriage/european-cyber-resilience-act/report?sid=7901

[66] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at p 23.

[67] Novelli, Claudio and Casolari, Federico and Hacker, Philipp and Spedicato, Giorgio and Floridi, Luciano, Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity (January 14, 2024). Available at SSRN: https://ssrn.com/abstract=4694565 or http://dx.doi.org/10.2139/ssrn.4694565 at 24.

[68] https://www.euractiv.com/section/cybersecurity/news/eu-council-clarifies-cyber-resilience-acts-interplay-with-ai-act-product-safety/

[69] https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32022L2555

[70] https://digital-strategy.ec.europa.eu/en/policies/nis2-directive

[71] https://enterprise.gov.ie/en/what-we-do/the-business-environment/digital-single-market/eu-digital-single-market-aspects/digital-services-act/

[72] https://www.irishstatutebook.ie/eli/2024/act/2/enacted/en/pdf

[73] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32022R2065

[74] One interesting article that deals with big data and AI issues around B2C transactions is Wagner, Eidenmüller, Down by Algorithms? Siphoning Rents, Exploiting Biases, and Shaping Preferences: Regulating the Dark Side of Personalized Transactions https://lawreview.uchicago.edu/print-archive/down-algorithms-siphoning-rents-exploiting-biases-and-shaping-preferences-regulating. Or see Wagner, Gerhard and Eidenmueller, Horst G. M., Down by Algorithms? Siphoning Rents, Exploiting Biases and Shaping Preferences – The Dark Side of Personalized Transactions (March 30, 2018). University of Chicago Law Review, Oxford Legal Studies Research Paper No. 20/2018, Available at SSRN: https://ssrn.com/abstract=3160276 or http://dx.doi.org/10.2139/ssrn.3160276

[75] Ibid.

[76] See Article 6 below.

[77] Article 30

[78] Article 42 and Article 39 and Article 24

[79] Article 34

[80] Article 45

[81] Article 44

[82] https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act/europe-fit-digital-age-new-online-rules-platforms_en

[83] See Article 4, Article 6, Article 16. 

[84] See generally Husovec, Martin, ‘Liability Exemptions: General Requirements’, Principles of the Digital Services Act (2024; online edn, Oxford Academic), https://doi.org/10.1093/law-ocl/9780192882455.003.0006, accessed 20 Aug. 2024.

[85] https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32022R2065

[86] https://deliverypdf.ssrn.com/delivery.php?ID=305031083029083114099088103029067109034071000010027054111070006102104101002110012102099011058111062051098122022102004108111073025070025007037005080024064030082072123068055056069028074089085030073011076085024026119005071031013118094106001123012072083029&EXT=pdf&INDEX=TRUE at p. 25.

[87] See https://www.lexology.com/library/detail.aspx?g=1e81431e-add4-4f39-bbe6-14fb6e85b4cc and https://www.wsj.com/tech/ai/apple-says-regulatory-concerns-may-prevent-rollout-of-ai-features-in-europe-0b0aaf5e

[88] https://www.nytimes.com/2024/04/08/technology/deepfake-ai-nudes-westfield-high-school.html

[89] See also Busch, Ella, and Jacob Ware. The Weaponisation of Deepfakes: Digital Deception by the Far-Right. International Centre for Counter-Terrorism, 2023. JSTOR, http://www.jstor.org/stable/resrep55429. Accessed 2 June 2024.

[90] Ibid.

[91] “Dr. Bregy, the superintendent, said schools and lawmakers needed to act quickly because the abuse of A.I. was making students feel unsafe in schools.” (https://www.nytimes.com/2024/04/08/technology/deepfake-ai-nudes-westfield-high-school.html)

[92] https://www.ic3.gov/Media/Y2024/PSA240329

[93] https://www.congress.gov/bill/118th-congress/house-bill/5586/text

[94] https://www.gov.uk/government/news/government-cracks-down-on-deepfakes-creation#:~:text=Despicable%20people%20who%20create%20sexually,today%20(16%20April%202024).&text=Under%20the%20new%20offence%2C%20those,record%20and%20an%20unlimited%20fine.

[95] https://www.ft.com/content/a2b4896b-c48c-4b00-ab1a-e8cd7f98b299

[96] https://www.nytimes.com/2024/09/17/technology/california-deepfakes-law-social-media-newsom.html?searchResultPosition=2

[97] https://www.gov.ca.gov/2024/09/17/governor-newsom-signs-bills-to-combat-deepfake-election-content/

[98] https://www.nytimes.com/2024/09/17/technology/california-deepfakes-law-social-media-newsom.html?searchResultPosition=2

[99] https://www.edpb.europa.eu/our-work-tools/our-documents/topic/artificial-intelligence_en

[100] https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/ico-consultation-series-on-generative-ai-and-data-protection/

[101] https://www.cnil.fr/en/ai-cnil-publishes-its-first-recommendations-development-artificial-intelligence-systems

Chapter 5

Artificial Intelligence and Liability

Introduction

Questions of liability for Generative Artificial Intelligence systems fall into two principal categories: (i) liability which attaches for a breach of overarching regulatory and governance provisions in respect of Artificial Intelligence usage (e.g. for a failure to comply with the EU AI Act); and (ii) liability which attaches to the outputs of, for example, a Large Language Model. This latter category is mainly concerned with defamatory content: where an LLM hallucinates and incorrectly attributes to an identifiable individual involvement in some nefarious activity. Liability in this respect is not necessarily restricted to defamation, however, and could, theoretically, encompass other torts such as negligent misstatement. Much will depend on how courts choose to characterise the information provided by an LLM or other Artificial Intelligence tool: whether it is simply unpublished information which is subsequently published by a user on one or other platform, or whether the provision of information by an LLM is itself publication. This issue goes to the heart of defamation, but it is not necessarily confined to defamation. The courts will also have to distinguish between information provision in the traditional way, pursuant to search-engine searches, and information provision by an LLM, which gives a life-like response to questions posed to it by a user. In point of fact there may not be any distinction made between the two: courts could simply see LLMs as merely providing raw data which the recipient should verify before using, in much the same way as a search engine yields a series of results. Google, for instance, disclaims all liability.[1] If this is the case then defamation actions, where an LLM hallucinates and provides false information regarding an identifiable individual, would become much harder to prosecute – all the more so where disclaimers have been inserted by an LLM provider, such as that of Google below. These are all questions the courts will have to untangle.

Further, there are other potential liability issues which may arise with the use of Artificial Intelligence systems – aside altogether from Generative AI – including the case where a physician relies on Artificial Intelligence within the context of assistive-diagnostic AI.[2] There may be common cases of negligence arising from incorrect reliance on Artificial Intelligence systems during the course of providing professional services: malpractice lawsuits against lawyers for instance.[3]

Going even deeper, organisational workflows might incorporate AI systems into internal administrative regimes such as task-handling and risk assessment. An error in either could potentially expose to liability an organisation which had not adopted adequate cross-checking regimes to catch mistakes: for instance, where negative consequences follow for a third party as a result of an errant, AI-enabled decision.

This chapter will begin with a consideration of defamation, which is the principal potential source of liability for Large Language Model information attribution. It will then look at the concept of “explainability”: there are issues around the attribution of liability in the face of complex “black box” Artificial Intelligence systems, and explainable Artificial Intelligence may be requisite to handling Artificial Intelligence liability.[4] After considering explainability, the chapter will move on to other types of liability attributable to AI, including one type of liability for usage of Artificial Intelligence systems which does not necessarily fall into the Generative-AI category: assistive-diagnostic AI in the field of medicine. It will then briefly consider another context for liability arising from consumer usage of Artificial Intelligence systems integrated into everyday products, which the EU hopes to address pursuant to its proposed AI Liability Directive. The renewed Product Liability Directive is also mentioned. Both of those instruments are examined in more detail in Chapter 8 on the EU approach to Artificial Intelligence.

The chapter will consider the concept of electronic personhood and it will also consider the issue of “quasi-autonomy” focusing on different fields where there is a cross-over between Artificial Intelligence and human oversight: self-driving vehicles, content moderation, and passenger name records. Finally, the chapter will consider the issue of future liability for Artificial General Intelligence. It will refer to a proposal by the EU to attribute liability to a robot.

Defamation

One aspect of Artificial Intelligence liability we can expect to encounter is that concerning claims for defamation or injurious or malicious falsehood. This may arise owing to the capacity of LLMs to “hallucinate”, meaning they can generate plausible-sounding but entirely false content. Issues arise as to whom[5] liability would be attributed where an LLM defames a person while hallucinating. The problem has also been referred to in the following context, which could potentially put the spotlight on an LLM deployer:

“[E]ven if a user were directly liable for infringement, the AI company could potentially face liability under the doctrine of ‘vicarious infringement’, which applies to defendants who have ‘the right and ability to supervise the infringing activity’ and ‘a direct financial interest in such activities’”.[6]

One source considers that an issue of liability attaching to the deployer may stem from the practice of red-teaming models – that is, interventions designed to prevent an LLM from hallucinating – where the deployer seeks to mitigate problems arising from problematic speech such as falsely accusing people of serious misconduct. The author asks whether such red-teaming behaviours actually present a liability risk for model creators and deployers.[7]

The first case of its kind in the world is Walters v. OpenAI.[8] The case was taken against Open AI in the United States of America in June 2023, when a radio presenter, Mark Walters, host of Armed America Radio, claimed that ChatGPT had produced the text of a made-up legal complaint accusing him of embezzling money from a gun-rights organisation. Walters said he had never been accused of embezzlement or worked for the group. In May 2025 the Superior Court of Gwinnett County, Georgia granted a motion by Open AI to dismiss the claim.[1]


[1] https://www.bfvlaw.com/georgia-court-dismisses-defamation-claim-against-openai-a-win-for-ai-developers-and-legal-clarity-in-defamation-defense/

OpenAI, in a motion, had noted that “when the reporter asked ChatGPT to summarize the complaint, ChatGPT responded several times with disclaimers, including that ChatGPT could not access the underlying document and that the reporter needed to consult a lawyer to receive “accurate and reliable information” about it.”[10]

It was also noted by one source[11] that several potential legal defences are available in the United States to defamation claims arising from LLM-generated outputs.

According to The New York Times, the Walters case was thrown out in May 2025;[1] the newspaper also reported that at least six similar cases had been filed in the previous two years in the USA.[2]


[1] https://www.nytimes.com/2025/11/12/business/media/ai-defamation-libel-slander.html

[2] Ibid.

In Ireland the issue has already arisen too. The Irish Times reported[13] that a well-known broadcaster had initiated legal proceedings for defamation in what his lawyer described as “the new frontier of libel law”. It was reported that an online news item had mistakenly attached his image to a story about a different, unnamed broadcaster who was on trial for sexual offences – a story with which he had no connection. His legal team suggested that an automated news aggregator may have malfunctioned and been responsible for using his image in error.[14]

The case is a first in Ireland but joins several around the globe. For instance, in the United States of America Chat GPT quoted a fake newspaper article which falsely stated that a named law professor had sexually harassed a student, claiming this had taken place while he was a member of the faculty of a university, during a class trip. Neither the faculty membership nor the trip was real – the details were entirely fabricated. Also in that jurisdiction, a company filed suit against Google after its Gemini model indicated the company had settled an action against it for deceptive sales practices – a claim that was completely false.[1]


[1] https://www.nytimes.com/2025/11/12/business/media/ai-defamation-libel-slander.html

In Australia, it was reported, an elected mayor had been informed that Chat GPT was claiming he had spent time in prison for bribery. In fact he was innocent: he had been a whistleblower who uncovered international bribery associated with an Australian bank subsidiary.

Explainability and Negligent Misstatement

Law firm DLA Piper, in a post[15] authored by Shea Coulson, considers “arguably the most important new legal concept developed in response to the creation of highly complex artificial intelligence models trained on massive amounts of data and computer power”. The concept is called explainability and, in essence, it is the ability to explain the complex processes resulting in a decision that impacts on an individual in a way that person can understand. Explainability as a concept is present in the EU AI Act and also in the draft legislation proposed by the Senate in Brazil. The idea is that explainability ensures transparency. In the words of Coulson, explainability enables the rule of law because it subjects algorithmic outputs to the principle of reasoned justification. This, say lawmakers, enables effective oversight of artificial intelligence systems. Explainability also goes to the heart of developing trustworthy AI.

“Currently, there is commentary in the academic community that large language models express the same essential limitations as humans for certain cognitive tasks, and display similar biases such as priming, size congruity, and time delays. These biases and inaccuracies[16] are often undetectable by humans simply relying on an output generated by an AI system, particularly very complex systems built from neural-net deep learning applied to massive data sets.[17]

To solve these problems, researchers are currently exploring a number of solutions. Intelligent AI agents, for example, can be designed for the purpose of helping humans interact with and understand other artificial intelligence systems. They do so by, for example, reducing human user cognitive load, which can improve the quality of decisions and the ability of that user to understand an AI system’s outputs. Such intelligent agents can also be designed to explicitly provide explanations about why other AI systems are making certain recommendations or decisions, thereby improving human-AI collaboration and comprehension. Large language models can be used to improve the end user’s understanding of an underlying system by being engineered specifically to, for example, not just produce or draw inferences from data but also to explain in natural language how data outputs were derived. In the future these explanations will not necessarily need to take written form, but could also involve the creation of augmented reality environments designed to facilitate human understanding and improve human reasoning working with AI systems. These types of agents will be intended to improve human acceptance of AI outputs, reduce complaints and skepticism, and ultimately produce higher quality combined human-AI outputs.”[18]

One source explains helpfully:

“The use of (…) explanatory techniques would help simplify many complex problems that can occur with AI systems and autonomous decision-making, such as the problem of shared responsibility and a lack of knowledge about how AI systems make decisions and reach robust legal outcomes. Their further development and adoption should allow AI liability cases to be decided under current legal and regulatory rules, until (if it ever happens) new liability regimes for AI are enacted. One could ask if the above explanations are sufficient to determine liability in cases of loss or damage as a result of an AI decision. In our point of view, the answer is yes. Since XAI [explainability] techniques can answer what, how, and why, and, by answering these questions, we can establish the factual and legal causation (required by common law) and the causal nexus (required by the civil law), the obligations required by both legal systems to establish causation are fulfilled. Thus, courts will be able to proportionately assign liability to such failings and deal with problems of shared responsibility and a lack of knowledge about AI system decision processes.”[19]
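
By way of illustration only, the kind of per-feature breakdown such an explainability layer can surface is sketched below for a simple linear risk score. The feature names, weights and values are entirely hypothetical and are not drawn from any cited source.

```python
# An illustrative sketch of a per-feature explanation for a linear risk score.
# Feature names, weights and values are hypothetical.

features = {"late_payments": 3, "income_band": 2, "years_at_address": 8}
weights  = {"late_payments": -1.5, "income_band": 0.8, "years_at_address": 0.2}
bias = 1.0

# Each contribution answers: how much did this input push the score up or down?
contributions = {name: weights[name] * value for name, value in features.items()}
score = bias + sum(contributions.values())

print(f"score = {score:.2f}")
for name, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {name:>16}: {contribution:+.2f}")
```

Even so rudimentary a breakdown addresses the “what, how and why” referred to above: it shows which inputs drove the output and by how much, which is the kind of reasoned justification a court could work with.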

Explainability ties into our discussion on liability through its close connection with the concept of misrepresentation. Negligent misrepresentation was first recognised in Hedley Byrne. In that case (in which the claim ultimately failed owing to a disclaimer) the court pointed, obiter, to the grounds on which a duty of care would arise where negligent advice was given which resulted in economic loss:

The representation in question must be untrue, inaccurate or misleading but there is no requirement that the representor engage in dishonest or fraudulent conduct – it is enough if the representor acted negligently in making the representation. 

“Claims that outputs of AI Systems fall within this category are expanding because such systems are known to produce inaccurate outputs that appear to be reliable. Users, of course, bear a certain responsibility for relying on such outputs; however, not every situation is equal and in some cases users may be led to believe that an output is reliable and then act on that output.”[20]

In one case, Air Canada was held liable in 2024 for a negligent misrepresentation[21] made to a customer by one of its chatbots. The bot had provided incorrect information on bereavement fares – specifically, whether a reduced rate could be claimed after travel. The customer relied on the information given and, when he was denied a partial reimbursement subsequent to travel, he challenged that decision. Air Canada admitted the information was misleading but said that during the chat the customer had been referred by the bot to a link which held the correct information. The tribunal found in favour of the customer, holding that it should have been obvious to Air Canada that it was responsible for all of the information on its website, regardless of whether it appeared on a static page or via an interactive bot.

Other cases might include instances of bias:[22] law firm Mason Hayes and Curran, in a post,[23] refers to a case in 2014 when Amazon had to cease using an AI recruitment tool that favoured male applicants.

“It has also been reported that candidates from ethnic minorities or with certain disabilities, such as speech impediments or neurological conditions, find it more difficult to engage with AI interview software which analyses speech patterns or facial expressions.”[24]

A typical defence to a claim for negligent misstatement would be the existence of a disclaimer or terms and conditions within a contractual setting. Explainability is considered to provide “a powerful layer of protection on top of contractual terms and conditions”. 

“Explainability not only assists a user understand the context of an output, but actually provides a set of understandable reasons or an understandable context, that allows the user to apply critical thinking and reasoning skills to appreciate what an output is and what it is not, how that output was derived, and where inferences that led to that output may have gone awry.

If, for example, a customer-facing generative AI chatbot did not simply produce outputs that appeared to be natural language human-like answers to questions, but also made accessible to the user an explanation that this output was derived from an inference model trained on a specific data set for a limited set of purposes, and then provided a basic chain of logic (even if by analogy) to show the user how their input question generated the output, then that user would likely have little basis to argue that they were provided a representation that the natural language output was definitively true and could be relied upon.” [25]  
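
The accompanying explanation described in that passage might, purely illustratively, be carried alongside the answer itself in a structured response. Every field and value in the sketch below is hypothetical; it is offered only to show the shape such an explained output could take.

```python
# An illustrative sketch of a chatbot response that carries an explanation
# alongside the answer: what it says, what it was derived from, and its limits.

from dataclasses import dataclass, field

@dataclass
class ExplainedAnswer:
    answer: str             # the natural-language output shown to the user
    derived_from: str       # the data or policy snapshot the answer was drawn from
    reasoning_summary: str  # a basic chain of logic, in plain language
    caveats: list = field(default_factory=list)  # what the output is *not*

reply = ExplainedAnswer(
    answer="Reduced bereavement fares may be claimed up to 90 days after travel.",
    derived_from="Fare-policy snapshot dated 1 January 2024 (may be out of date).",
    reasoning_summary="Matched the question to the 'bereavement travel' section "
                      "of the snapshot and summarised the refund window stated there.",
    caveats=["Not a binding statement of current policy; see the linked policy page.",
             "Generated text may contain errors."],
)

print(reply.answer)
print("Source:", reply.derived_from)
print("Why:", reply.reasoning_summary)
for caveat in reply.caveats:
    print("Note:", caveat)
```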

Other Liability Types

Aside from the issues of defamation and misrepresentation already considered, liability can also arise as a result of a breach of copyright. Such issues have been looked at in a standalone chapter on that topic. It is worth noting that liability for breach of copyright could arise from the training of a Large Language Model on copyrighted materials and also from the outputs of the model in response to user inputs, where those outputs contain copyright material. This may be in circumstances where the output puts the LLM in competition with the original provider of the copyrighted materials – the argument raised by The New York Times in its litigation against Open AI.

Another area of potential liability falls to be considered within the context of Data Protection. Again, this has been considered within its own standalone chapter – the issues raised include the age verification procedures of the LLM and the legal basis for its processing of personal data in circumstances where user inputs may include the personal data of others. 

Aside from these issues already considered, other potential heads of liability arising from generative artificial intelligence include trademark infringement, where outputs use the registered trademark of another without permission – such as the use of a specific brand’s logo or slogan in advertising or marketing materials.[26]

Other types of infringement include patent infringement and design infringement which could potentially bring legal liability and reputational damage on the entity or individual using the generative AI.

In an article, Ding Ling considers the determination of subjective fault in tort cases involving generative AI to be a “complicated and critical issue”. The author explains that subjective fault liability usually refers to infringement caused by the intentional or negligent actions of developers, users or managers, whereas intentional infringement involves the use of AI technology to knowingly violate the legitimate rights and interests of others. The author gives the example of a company knowingly using generative AI to copy a competitor’s copyrighted materials as an instance of wilful infringement.[27] On the other hand, the example is given of a developer that does not properly review and filter the data used by its AI system, resulting in the system generating infringing content – a possible example of negligent liability. In practice, the author considers, the issue will turn on whether reasonable precautions have been put in place to avoid infringement: whether there are copyright checks on the data used, effective content monitoring, and filtering mechanisms.[28]

“In these cases, the determination of liability often requires a combination of technical complexity, industry standards, legal requirements, and specific use scenarios.”[29]

There is also the issue of no-fault liability, where liability is not based on intentional or negligent actions but on strict liability: developers, for example, may be held liable if their technology causes infringement, irrespective of whether there was negligence. Such liability attribution needs to balance technological innovation on the one hand with the protection of individual rights on the other. The proposed EU AI Liability Directive shows elements of a strict liability regime.[30] One author considers that strict liability is required in cases involving high-risk AI systems.[31]

Ding Ling also looks at the applicable standard of proof in cases involving generative AI and states:

“The standard of proof is a key link in the trial process, requiring the right holder to provide sufficient evidence to prove that his rights have been violated. This may include showing similarities between the original work and the alleged infringing content, proving ownership of a trademark or patent, and the existence of an infringement. In the case of generative AI, this may require complex technical analysis and expert evidence, such as how the algorithm works, the legitimacy of the data source, the originality of the generated content, etc. The identification of damage is also an important part of proof, and the right owner needs to show the specific losses he has suffered due to the infringement including economic losses and non-material losses.”[32]

Overall, the author considers that generative AI judicial trial standards should be established which “need to take into account the clarity of claims, the rationality of proof standards, and the validity of defence grounds (fair use, where available, for example)”.

“At the same time, the court needs to have the corresponding technical knowledge and professional judgement ability when dealing with these cases to ensure the fairness and accuracy of the trial.”

Assistive-diagnostic AI

Another area of interest relates to the use of assistive-diagnostic AI.[33] Rimkuté notes that AI is swiftly integrating into clinical practice within the EU and that from 2015 to 2020 alone the EU approved 224 medical AI tools, with many more now emerging.[34] These tools are crafted to aid doctors in diagnosis by, for example, recognising indicators of conditions such as cancer or stroke, categorising cancerous lesions in images of the skin, or assessing the likelihood of heart disease.[35]

The author explains that:

“The introduction of assistive-diagnostic AI will change the dynamics of diagnostic decision-making. Under the new model, the doctor will remain the decision maker with her diagnosis directly affecting the patient. However, assistive-diagnostic AI will influence the decision of the doctor by offering its view on diagnosis and, therefore, impact the patient’s outcome indirectly.”[36]

Ultimately this means a patient harmed by an incorrect diagnosis may find that the origin of the harm stems from three different sources: the producers of the AI; the doctors who overlooked symptoms or misjudged the AI recommendations; and the hospital itself – giving rise to potential exposure in medical malpractice, corporate negligence of the healthcare institution, and product liability. Ultimately, argues the article, the use of AI may go to the standard of care, which may or may not be evaluated with special consideration for the use of AI; the article highlights a divergence in views on this point.[37] As regards the standard of care in each case, the author states firstly that:

“If assistive-diagnostic AI is integrated into the standard of care, which means that its use is governed by authoritative sources that constitute the medical standard of care, the algorithm for evaluating whether a doctor is at fault should follow this process: first, identifying the duties that modern science or practice imposes on doctors regarding the use of assistive-diagnostic AI; second, assessing whether the doctor has breached those duties and it has caused harm to the patient.”[38]

Contrariwise, in cases where AI does not form part of the standard of care (and this may be a matter to be determined by the Court in proceedings) the author considers that: “the general rule is that the evaluation of a doctor’s professional duties is conducted in the same manner as in conventional cases, irrespective of assistive-diagnostic AI’s prognosis. In such cases, the assessment of a doctor’s malpractice would be conducted by addressing whether the doctor’s diagnosis aligns with the standard of care (regardless of the assistive-diagnostic AI’s diagnosis) and, if not, whether it resulted in harm to the patient.”[39]
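The two-step evaluation described above, together with the contrasting position where assistive-diagnostic AI falls outside the standard of care, can be reduced to a short illustrative sketch. The function and its parameters are hypothetical labels for the factors discussed in the article, not a statement of how any court would actually assess fault.

```python
def doctor_at_fault(ai_in_standard_of_care: bool,
                    breached_duties_on_ai_use: bool,
                    diagnosis_met_standard_of_care: bool,
                    harm_to_patient: bool) -> bool:
    """Illustrative sketch of the fault-evaluation process described above.

    Where assistive-diagnostic AI is part of the standard of care, the question is
    whether the doctor breached the duties that standard imposes regarding the AI's
    use and whether that breach caused harm. Where it is not, the doctor is assessed
    against the conventional standard of care, regardless of the AI's prognosis.
    """
    if ai_in_standard_of_care:
        return breached_duties_on_ai_use and harm_to_patient
    return (not diagnosis_met_standard_of_care) and harm_to_patient


# Example: AI is not part of the standard of care and the diagnosis met that
# standard - no fault, even if the (ignored) AI suggestion turned out to be correct.
print(doctor_at_fault(False, False, True, True))  # False
```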

In conclusion the article points out that:

“These results suggest that, regardless of the accuracy or inaccuracy of the AI diagnosis, doctors are held liable only when they deviate from the standard of care. Whether this result is deemed satisfactory depends on the perspective. On one hand, the current view reinforces the principle that doctors are primarily accountable for adhering to established standards, providing a degree of consistency and predictability in legal outcomes. On the other hand, some may argue that the liability system should consider the accuracy of AI diagnoses as a significant factor in determining physician culpability. This could be particularly relevant in situations where AI outperforms the standard of care, raising questions about the adaptability of legal frameworks to advancements in technology.”[40]

Organisational negligence on the part of the healthcare provider might arise from inadequate training[41] or from a breach of the obligation to employ safe equipment.[42] Liability of the producer of the product is also possible.[43]

EU and AI Liability

While there is no sign yet of life-like robots exhibiting Artificial General Intelligence (“AGI”) – robots with life-like responses, including to questions posed to them – we may be within 20 years of their deployment; on some views we are less than five years away from AGI, with some even suggesting that we are already witnessing the early stages of its design. As lawyers, we have a responsibility to regulate for adverse future outcomes. We also have a responsibility to ensure continued innovation and growth in the economy.

With this in mind the EU has seen fit to look at the issue of Artificial Intelligence liability. We must be clear that the rules proposed are not concerned with AGI, as such, at least not expressly. Instead, what the European Commission has in mind is the provision of Artificial Intelligence systems as part of another product: where AI is part of the package purchased. 

The European Union embarked on a process to consider issues of liability with a proposal for an AI Liability Directive[1] (“the Proposed Directive”),[2] which was withdrawn in 2025.[3] An accompanying proposal for a new Products Liability Directive, which did proceed, will also be considered. This approach differed markedly in direction from an earlier position of ascribing electronic personality to robots,[4] discussed in the section below. While we may laugh at such a proposition now, it is possibly the robot that will have the last laugh, especially if we develop AGI as soon as some anticipate.

Taking a different approach from its enactment of the Artificial Intelligence Act – which, in parallel with the Proposed Directive (now withdrawn), regulates horizontally the development of AI systems more generally in the form of a directly applicable Regulation[5] – the European Commission sought agreement on a Directive, not a Regulation, on the specific issue of liability for Artificial Intelligence. This of course lines up with the EU’s treatment of liability for products more generally under its existing, and new, Product Liability regime, and it is likely this was a factor in the decision to opt for a Directive with respect to liability for Artificial Intelligence. Opting for a Directive would have permitted a degree of flexibility in how each Member State chose to implement the provisions ultimately set down by the European Union on the issue. Straight away, however, we can anticipate this would have led to variations in the treatment of questions of liability among Member States.[6] The European Commission addressed its choice of instrument by stating:

“A directive is the most suitable instrument for this proposal, as it provides the desired harmonisation effect and legal certainty, while also providing the flexibility to enable Member States to embed the harmonised measures without friction into their national liability regimes. A mandatory instrument would prevent protection gaps stemming from partial or no implementation. While a non-binding instrument would be less intrusive, it is unlikely to address the identified problems in an effective manner. The implementation rate of nonbinding instruments is difficult to predict and there is insufficient indication that the persuasive effect of a recommendation would be strong enough to produce consistent adaptation of national laws.”[7]

One organisation, the Future of Life Institute, considered more closely in the following chapter on Superintelligence, has already argued that there ought to be harmonisation of rules across all EU Member States on issues like compensable damages. That organisation, in a Position Paper, states:

“The importance of immaterial harms caused by AI systems was recognised in the Commission’s 2020 White Paper on Artificial Intelligence. It specifically lists the “loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment” amongst the harms. The proposed Artificial Intelligence Liability Directive allows for immaterial damages to be covered such as discrimination. However, the proposed Directive leaves it up to Member States to define through their national laws the exact types of damages that will be covered.”[8]

That organisation goes on to consider the treatment of more general consumer items that fall under the EU Product Liability regime and says that the recent proposal for a Directive in that area “does include a harmonised definition of what constitutes damage”.[9] In its press communiqué on the introduction of both Proposed Directives – the Artificial Intelligence Liability Directive and the New Product Liability Directive – the Commission explains that both were being introduced with a view to paving the way for future technology innovations:

“Today, the Commission adopted two proposals to adapt liability rules to the digital age, circular economy and the impact of global value chains. Firstly, it proposes to modernise the existing rules on the strict liability of manufacturers for defective products (from smart technology to pharmaceuticals). The revised rules will give businesses legal certainty so they can invest in new and innovative products and will ensure that victims can get fair compensation when defective products, including digital and refurbished products, cause harm. Secondly, the Commission proposes for the first time a targeted harmonisation of national liability rules for AI, making it easier for victims of AI-related damage to get compensation. In line with the objectives of the AI White Paper and with the Commission’s 2021 AI Act proposal, setting out a framework for excellence and trust in AI – the new rules will ensure that victims benefit from the same standards of protection when harmed by AI products or services, as they would if harm was caused under any other circumstances.”[10]

In the result, the Proposed Directive (had it not been discontinued) would have applied to non-contractual fault-based civil law claims for damages in cases where the damage caused by an AI system occurs after the end of the transposition period.[11] The Proposed Directive specifically excluded any impact on rights which arise under Product Liability rules.[12] One source considers the Proposed Directive did not go far enough, stating:

“The proposed easing of the burden of proof for victims of AI, through enhanced discovery rules and presumptions of causal links, is insufficient in a context where Large Language Models exhibit unpredictable behaviours and humans increasingly rely on autonomous agents for complex tasks.”[13]

The Proposed Directive cross-references the parallel Artificial Intelligence Act for definitions for terms such as “AI system”, “provider” and “user”. A duty of care is defined as: “a required standard of conduct, set by national or Union law, in order to avoid damage to legal interests recognised at national or Union law level, including life, physical integrity, property and the protection of fundamental rights.”[14]

The Proposed Directive addressed the disclosure of evidence, providing for a rebuttable presumption of non-compliance where there is a failure to disclose relevant evidence, and it also required Member States to provide for a rebuttable presumption of a causal link in the case of fault. With respect to damage caused by AI systems, the Proposed Directive expressly states, in its Explanatory Memorandum, that it can be challenging for claimants to establish a causal link between non-compliance (with the AI Act, for example) and the output produced by the AI system that gave rise to the relevant damage.[15] For that reason, a “targeted rebuttable presumption of causality has been laid down in Article 4 (1) regarding this causal link”.[16] Such fault can be established, for example, by non-compliance with a duty of care pursuant to the AI Act.[17]

Paragraphs (2) and (3) of Article 4 differentiate between, on the one hand, claims brought against the provider of a high-risk AI system or against a person subject to the provider’s obligations under the AI Act and, on the other hand, claims brought against the user of such systems. In cases where the defendant uses the AI system in the course of a personal non-professional activity, Article 4(6) provides that the presumption of causality should only apply if the defendant has materially interfered with the conditions of the operation of the AI system.[18] The Directive was to be reviewed five years after the end of its transposition period.[19]

In 2022 the European Union Intellectual Property Office (EUIPO) considered in a study the potential impact of AI on the infringement and enforcement of copyright and designs.[20] The study shows, through several hypothetical scenarios, 20 examples of how AI can be used both as a tool for the production and sale of copyright-infringing content or for the design and sale of infringing goods, and by rightholders themselves in pursuing their legitimate interests – a finding depicted as a double-edged sword. With this in mind the EU has built a regulatory framework, focusing first on the AI Act before returning to consideration of the Proposed Directive. The Proposed Directive was intended to be complementary to the AI Act: it would rely on the AI Act for substantive rules on AI development and deployment, and import the definitions of AI, and of high-risk AI systems, from the AI Act. Marinković explains:

“The Proposal for an AI Liability Directive introduces new rules specific to damages caused by AI systems by creating a ‘rebuttable presumption of causality’. So, if somebody is claiming damages from the provider or the user of an AI system, the claimant would not have to prove that the defendant was at fault, provided that: (i) the AI system’s output (or failure to produce an output) was reasonably likely to have caused the damage; (ii) that damage or harm was caused by some human conduct affecting the AI system’s output; and (iii) the conduct did not comply with a certain obligation relevant to the harm, ie did not meet the duty of care under EU or national law that was directly intended to protect against the damage that occurred. These general rules set in the Proposal for an AI Liability Directive in Article 4(1) are applicable in all situations where damage was caused by an AI system. For situations where such damage was caused by high-risk AI systems, there are specific provisions in place in Article 4(2) (when damages are claimed against a provider of a high-risk AI system) and in Article 4(3) (when damages are claimed against a user of a high-risk AI system). Failure to meet the duty of care under the general rule can in such cases be established if the defendant did not comply with the horizontal rules on AI systems (eg relevant obligations for ‘high-risk AI Systems’) set out in the AI Act.”[21]
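By way of illustration only, the cumulative conditions Marinković describes under Article 4(1) might be sketched as follows. The data fields and function name are hypothetical labels for the three conditions set out above and do not form part of the Proposed Directive; the sketch also cannot capture the fact that the presumption, where it arises, remains rebuttable.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """Hypothetical record of an AI-related damages claim (illustrative only)."""
    output_likely_caused_damage: bool    # (i) output (or failure to output) reasonably likely to have caused the damage
    human_conduct_affected_output: bool  # (ii) damage traceable to human conduct affecting the AI system's output
    duty_of_care_breached: bool          # (iii) that conduct breached a duty of care intended to protect against such damage


def presumption_of_causality_applies(claim: Claim) -> bool:
    """Rough sketch of the cumulative test in Article 4(1) as summarised above."""
    return (claim.output_likely_caused_damage
            and claim.human_conduct_affected_output
            and claim.duty_of_care_breached)


# A claimant who can show (i) and (iii) but not (ii) would not, on this reading,
# benefit from the presumption.
print(presumption_of_causality_applies(Claim(True, False, True)))  # False
```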

Meanwhile, the New Products Liability Directive[22] lays down common rules on the liability of economic operators for damage suffered by natural persons caused by defective products and permits “compensation for damage when products like robots, drones or smart-home systems are made unsafe by software updates, AI or digital services that are needed to operate the product, as well as when manufacturers fail to address cybersecurity vulnerabilities”.[23] The word product in the proposal “includes electricity, digital manufacturing files and software”.[24]  

The EU liability rules for Artificial Intelligence, while they seek to address an overarching issue of concern in general terms, will not necessarily be suitable for every case that involves Artificial Intelligence systems – bespoke sets of rules, like those which may come to apply to the user of a “self-driving” vehicle,[25] for instance, may still be required in the future.

One recent contribution to the discussion on civil liability proposes a strict liability regime in respect of personal injury and death, and a bespoke fault-based regime for dignitary or reputational injuries. Interestingly, that contribution poses and answers the following question:

“Assuming a machine is seen as at fault, or a decision taken by AI ought to give rise to strict liability, who pays? As mentioned above, it cannot be the machine itself, an entity with neither the propensity to own assets with which to satisfy an award of damages, nor the ability spontaneously to insure itself against the possibility of having to pay them. (Nor, unlike a human or a corporate entity, can it enter into a contract of employment or agency such as will make an employer liable for acts or omissions committed in the course of employment.) How should we get around this? The best way, it is suggested, is by legislative ascription of liability, the underlying aim of which should be a rough correlation of benefit and burden (in other words, those who substantially benefit from the utilization of AI should be the ones to pay when it goes wrong).”[26]

The Bar Council, in a submission on the proposed AI Liability Directive, states as follows:

“It is essential that liability be approached in a neutral manner, so that the law responds to an activity carried out by a human in the offline environment in the same manner as it would in the digital or online environments. Any major distinctions in approach could serve to undermine the wider liability regime and could lead to evasive or avoidance behaviours in the context of non-contractual civil litigation. Accordingly, the fault-based liability regime contemplated within [Artificial Intelligence Liability Directive] appears to set an appropriate and consistent standard for assessing liability. The success of the [Artificial Intelligence Liability Directive]  in this respect will turn on the definitions which are eventually adopted in the AI Act and in the new Product Liability Directive both of which control the types of damage and fault in respect of which liability can be imposed.”[27]


[1] See generally Guido Noto La Diega, Leonardo C T Bezerra, Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae021, https://doi.org/10.1093/ijlit/eaae021

[2] Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) which can be accessed https://commission.europa.eu/system/files/2022-09/1_1_197605_prop_dir_ai_en.pdf

[3] https://commission.europa.eu/publications/2025-commission-work-programme-and-annexes_en

[4] European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (European Parliament, 16 February 2017), para 59(f).

[5] See above

[6] For example, it does not necessarily follow that the Proposed Directive is intended to cover questions around liability for Artificial General Intelligence (“AGI”): this is something the EU, ultimately, will need to be more explicit about for its Member States. For general overview of Artificial General Intelligence see Byrne, John P “Regulating AI” in The Bar Review, Volume 28, Number 1, February 2023 at p. 12

[7] Explanatory Memorandum to the Proposed Directive. 

[8] The Future of Life Position Paper on AI Liability available to access at https://futureoflife.org/wp-content/uploads/2022/11/FLI_AI_Liability_Position_Paper.pdf

[9] COM(2022) 495 – Proposal for a directive of the European Parliament and of the Council on liability for defective products at 4 (6) where it states: “‘damage’ means material losses resulting from: (a) death or personal injury, including medically recognised harm to psychological health; (b) harm to, or destruction of, any property, except: (i) the defective product itself; (ii) a product damaged by a defective component of that product; (iii) property used exclusively for professional purposes; (c) loss or corruption of data that is not used exclusively for professional purposes;” available to access at https://single-market-economy.ec.europa.eu/system/files/2022-09/COM_2022_495_1_EN_ACT_part1_v6.pdf

[10] https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807

[11] Article 1(2) of the Proposed Directive. The Commission states: “The purpose of the AI Liability Directive is to lay down uniform rules for access to information and alleviation of the burden of proof in relation to damages caused by AI systems, establishing broader protection for victims (be it individuals or businesses), and fostering the AI sector by increasing guarantees. It will harmonise certain rules for claims outside of the scope of the Product Liability Directive, in cases in which damage is caused due to wrongful behaviour. This covers, for example, breaches of privacy, or damages caused by safety issues. The new rules will, for instance, make it easier to obtain compensation if someone has been discriminated in a recruitment process involving AI technology” accessible at https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807

[12] Article 1(3b) of the Proposed Directive. 

[13] Guido Noto La Diega, Leonardo C T Bezerra, Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae021, https://doi.org/10.1093/ijlit/eaae021, see abstract.

[14] Article 2(9)  of the Proposed Directive. 

[15] Explanatory Memorandum to the Proposed Directive.

[16] Explanatory Memorandum to the Proposed Directive.

[17] See Ana Rački Marinković, Liability for AI-related IP infringements in the European Union, Journal of Intellectual Property Law & Practice, 2024, jpae061, https://doi.org/10.1093/jiplp/jpae061

[18] Explanatory Memorandum to the Proposed Directive.

[19] Article 5 of the Proposed Directive. 

[20] European Union Intellectual Property Office (EUIPO), ‘Study on the impact of artificial intelligence on the infringement and enforcement of copyright and designs’ (2022) 64. Available at www.euipo.europa.eu/en/publications/study-on-the-impact-of-artificial-intelligence-on-the-infringement-and-enforcement-of-copyright-and-designs (accessed 10 June 2024).

[21] Ana Rački Marinković, Liability for AI-related IP infringements in the European Union, Journal of Intellectual Property Law & Practice, 2024, jpae061, https://doi.org/10.1093/jiplp/jpae061 at p. 2

[22] Directive on Liability for Defective Products which can be accessed https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0495

[23] https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807. For example: Article 10 of the Proposed New Products Liability Directive states that no exemption from liability shall apply to the benefit of the manufacturer of the product in circumstances where “the defectiveness of the product is due to any of the following, provided that it is within the manufacturer’s control: 

(a) a related service; 

(b) software, including software updates or upgrades; or

(c) the lack of software updates or upgrades necessary to maintain safety.” 

[24] Article 4(1) of the Proposed New Products Liability Directive. 

[25] See further down this chapter

[26] See Baris Soyer, Andrew Tettenborn, Artificial intelligence and civil liability—do we need a new regime?, International Journal of Law and Information Technology, 2023, eaad001, at p. 7 https://doi.org/10.1093/ijlit/eaad001

[27] Submission published on LinkedIn and available at https://www.linkedin.com/posts/thebarofireland_bar-of-ireland-submission-to-aild-activity-7026510281937678336-peEW/?originalSubdomain=ie

Electronic Personhood?

Authors Nerantzi and Sartor[70] consider the issue of AI crime: where a machine accomplishes a task previously performed by humans and, in the course of doing so, commits an “AI Crime” – i.e. engages in behaviour which would be considered a crime if it were carried out by a human. They give the example of an advanced AI trader which autonomously manipulates markets contrary to the best efforts of its designers.[71] This, say the authors, raises a criminal responsibility gap, since no agent (human or artificial) can be legitimately punished for the outcome.

The authors characterise such an AI agent as a machina economica: in their market-manipulation example the intended goal of the AI is profit maximisation. In striving to achieve this goal the AI commits a crime, and not for any evil purpose. The AI is guided by its utility function – the standard by which it evaluates and selects its actions. If the utility function is premised on economic results, the AI will seek out the actions that bring the greatest economic rewards. The AI cannot be guilty of a crime under current criminal law, as it lacks legal personality and the capacities required for criminal responsibility.

The authors detail possible approaches, including corralling these outcomes within the bounds of the current criminal law for humans. On this approach, they state, the easiest way to bridge any criminal responsibility gap is to attribute liability to AI providers and to “adapt the negligence regime to encompass scenarios of ‘foreseeable unforeseeability’”.[72] The authors also consider whether criminal responsibility should be ascribed to the AI agents themselves. This, they acknowledge, potentially leads to an overly complex discussion “mired in philosophical debates on the justification of punishment and the criteria for assigning blame for criminal conduct”.[73]

The authors introduce a concept called the ‘Deterrence Turn’, which focuses on a current deterrence gap: the legal system’s inability to provide adequate deterrence. This would involve designing an ‘AI deterrence paradigm’ – a new punitive regime, separate from criminal law, which would apply to an AI agent possessing a utility function and the ability to select actions according to their expected utility:

“[T]his can be achieved by modifying the machine’s expected outcomes so that, according to its very utility function, the expected utility of the lawful behaviour becomes higher than that of the unlawful behaviour.”[74]
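A minimal numerical sketch may help to fix ideas. Assuming, purely for illustration, a trading agent that compares the expected utility of a lawful and an unlawful strategy, a sufficiently large and sufficiently likely financial sanction tips that comparison back in favour of lawful conduct; all of the figures below are invented.

```python
def expected_utility(profit: float, sanction: float = 0.0,
                     detection_probability: float = 0.0) -> float:
    """Expected utility of a strategy for a purely profit-driven agent.

    A sanction is only incurred if the unlawful conduct is detected, so it is
    weighted by the probability of detection.
    """
    return profit - detection_probability * sanction


# Invented figures: unlawful trading earns more gross profit than lawful trading.
lawful = expected_utility(profit=100.0)
unlawful_unsanctioned = expected_utility(profit=150.0)
assert unlawful_unsanctioned > lawful  # without a sanction, the utility function favours the crime

# The 'AI deterrence paradigm' modifies expected outcomes: with an 80% chance of
# detection and a sanction of 100, the lawful strategy now has the higher utility.
unlawful_sanctioned = expected_utility(profit=150.0, sanction=100.0, detection_probability=0.8)
assert lawful > unlawful_sanctioned
```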

This becomes easier, say the authors, when AI agents are viewed as machina economica, in that a financial sanction is the easiest way to disincentivise criminal enterprises. Such sanctions could be linked to criminal actions (actus rei) resulting from intentional machine behaviour. Calls for deterrents of this type preceded findings by researchers at MIT that AI is already deceiving us:

“Talk of deceiving humans might suggest that these models have intent. They don’t. But AI models will mindlessly find workarounds to obstacles to achieve the goals that have been given to them. Sometimes these workarounds will go against users’ expectations and feel deceitful.”[75]

Authors Gless, Silverman, and Weigend also point to something similar when their overview of the area concludes that:

“Nevertheless, some researchers expect that Intelligent Agents will one day acquire the ability to engage in moral reasoning. Robots might be programmed with a system of ‘‘merits’’ and ‘‘demerits’’ for certain decisions they make, and that system could be treated as an analogue to human self-determination on moral grounds. Once that step has been taken, the attribution of criminal culpability to robots will no longer be out of the question.”[76]

In an article, authors Abbott and Sarch advocate holding AI directly criminally liable where it is acting autonomously and irreducibly,[77] though they accept there would be challenges with implementation:

“Mens rea, and similar challenges related to the voluntary act requirement, are only some of the practical problems to be solved in order to make AI punishment workable. For instance, there may be enforcement problems with punishing an AI on a blockchain. Such AIs might be particularly difficult to effectively combat or deactivate.

Even assuming the practical issues are resolved, punishing AI would still require major changes to criminal law. Legal personality is necessary to charge and convict an AI of a crime, and conferring legal personhood on AIs would create a whole new mode of criminal liability, much the way that corporate criminal liability constitutes a new such mode beyond individual criminal liability.”[78]

Personhood for AI still presents itself in the literature from time to time,[79] and in an earlier contribution the European Parliament’s Legal Affairs team considered[80] the notion of electronic personhood within the context of civil liability and concluded:

“[W]e need to assess the opportunity of granting legal personality with respect to precisely defined criteria, to be observed in the specific case or, better, with respect to single classes of applications, and the peculiarities they display in terms of (i) incentives, (ii) distribution of risks, (iii) possible cooperation of multiple human agents, as well as (iv) market structures. All such elements might influence a sound analysis leading to the identification of the preferable regulatory solution. It is clear (…) that law is never technology neutral and it is not sensible to overlook technological differences and the need for specific approaches in favour of general and all-encompassing solutions.”[81]

Finally, on a related point, authors Dahl et al[82] consider the question of whether LLMs know the law. In a paper they found that LLMs hallucinate 58% of the time and uncritically accept users’ incorrect legal assumptions – findings the authors considered particularly relevant given that these models are being used to augment legal practice, education and research.[83]

Quasi-Automation

Wagner, in a 2019 paper,[84] looks at “quasi-automation” – the inclusion of humans merely to rubber-stamp an automated decision-making system – identifying such practices in three areas: self-driving cars, border searches and content moderation on social media. The author notes that while there are specific regulatory mechanisms for purely automated decision-making, those mechanisms do not apply where a human rubber-stamps such decisions. This results in what the author describes as “regulatory grey areas.” The author states:

“Another challenge related to understanding the role of human agency in socio-technical systems is the assumption of binary liability. In the binary liability model, either a human or a machine must necessarily be at fault, which in turn links to a social argument about the need to blame someone  (…) However, this model of social blame translated into binary legal liability is unfit for a world of human-technical systems in which both equally contribute to decision making. The fact that humans are in the loop should not absolve automated systems—as is frequently currently the case—from being scrutinized legally.”[85]

This is all the more so, reasons the author, when regulation envisages a human in the loop actually reviewing every decision, when the reality is far different, with a human being merely rubber-stamping the decision – a process described as quasi-automation.[86]

Interesting too is the finding of Araujo et al[87] that decisions taken automatically by AI were often evaluated by the public as on a par with, or even better than, those of human experts for specific decisions.[88]

These findings should emphasise to policy-makers that, on the one hand, mandating a human in the loop is not a panacea for the safe deployment of an AI system, while, on the other, automated decision-making by an AI system may be perceived as performing quite well on its own.

The guidance of the Article 29 Working Party should be considered in this respect, where it states:

“To qualify as human intervention, the controller must ensure that any oversight of the decision is meaningful, rather than just a token gesture. It should be carried out by someone who has the authority and competence to change the decision. As part of the analysis, they should consider all the available input and output data.”[89]

Self-driving vehicles

This overlap between automated AI systems and human-in-the-loop intervention can be seen most clearly in the case of self-driving vehicles: one of the areas mentioned above by Wagner and addressed by authors Gless et al.[90] Ireland is towards the forefront of research and development in this area.[91] There are six generally accepted levels of autonomous vehicle, beginning with Level 0 and ending with Level 5. Level 0 is a vehicle which displays none of the characteristics of a self-driving vehicle: it relies on the traditional model of a driver, unassisted, driving the vehicle and retaining control over it throughout the whole of the driving process. Level 1 vehicles display the embryonic beginnings of autonomous driving: basic technological enhancements in the form of advanced driver assistance system (ADAS) features. For example, Autonomous Emergency Braking (AEB) assists the driver at particular moments by reacting to an emergency situation faster than a human can. The key word is ‘assistance’: these systems are designed to assist the driver, hence the name Advanced Driver Assistance Systems, while the vehicle remains at all times strictly in the control of the driver.

Level 2 vehicles are those which display what we can loosely describe as more advanced ADAS systems. These vehicles are capable of self-parking, for instance: a driver can exit the vehicle and it will park itself, with the driver retaining control via the key fob. Vehicles at this level are also capable of moments of self-driving on our roads – the vehicle can drive itself on the motorway and can even change lanes by itself. This is achieved by the use of cameras on the vehicle which view the road and determine the distance of other vehicles within its ambit. Under Irish law, the driver must at all times remain in control of the vehicle during these self-driving moments, which the driver does by placing her hand on the steering wheel when advised to do so by the on-board systems of the vehicle. This tells the vehicle that the driver is alert and ready to take back control. The driver is not permitted to engage in what might be described as distracting activities while the vehicle is self-driving, such as reading, watching a film, or engaging in screen time on an electronic device. At least some of these behaviours have, tragically, already resulted in the deaths of drivers using a nascent form of this self-driving technology in other jurisdictions.[92] Furthermore, in respect of Level 2, there will inevitably be a hand-over where the vehicle gives control back to the human driver. Recent research has shown that drivers need to be trained in this process to avoid the alarmingly long hand-over times recorded for untrained drivers.[93] This is one factor for Wagner when he observes that:

“Thus, it seems reasonable to argue that human drivers are not necessarily fully in the loop, simply because their presence is required in a technical system. Rather, it can be argued that the presence of drivers in self-driving cars is to assure the public that their safety is being taken care of, and that a specific person will be liable in the event of an accident. Because current research suggests that individual response times are considerable, it has to be asked how useful a driver would actually be in the case of an emergency.”[94]

Level 3 vehicles are those which display most of the basic capabilities of self-driving. The human driver remains in the driver’s seat: as with Level 2, the driver must demonstrate alertness and must take back control when advised, but at this level the vehicle is capable of self-driving across a wider range of situations than at Level 2. Readers may be surprised to learn that vehicles capable of Level 3 autonomy are already on Irish roads; under Irish law, however, such vehicles are not permitted to operate at this level of autonomy – in other words, while they are technologically capable of doing so, they are not permitted by law to self-drive at this level.

Level 4 autonomy vehicles are those which are designed to fully self-drive within certain pre-determined areas. Governments in the future could define the areas where these vehicles can operate at this level: for example, the Government in Ireland could designate the N81 from Blessington to Baltinglass as part of a network of routes which accommodate self-driving vehicles at Level 4. Readers familiar with this route will know this is a tricky stretch of road but it’s anticipated that self-driving vehicles operating at this level will be more than capable of driving roads of this type. The human driver will at all times remain in the driver’s seat, but, unlike Level 3, it’s anticipated the driver will be able to engage in those distracting tasks mentioned earlier while the vehicle drives itself. There will, as with Level 3 and Level 2, be a mechanism to take back control of the vehicle – at this level this will be required when the vehicle approaches the end of the zone in which it is permitted to self-drive. 

Finally, Level 5 indicates full autonomy: the driver will not even be required to sit in the driver’s seat. Some depictions of the driver position at Level 5 show the driver sleeping in a seat to the rear of the vehicle. There are even questions around whether we can consider the driver a driver at all or, in fact, simply a vehicle user. Various predictions have been made as to when Level 5 autonomous vehicles will be widely available: the year 2030 was widely predicted in various sources,[95] but this now seems optimistic. A more likely outcome would be vehicles at this level of autonomy in popular usage on our roads from around 2040, although even then caution is advised owing to the difficulties self-driving vehicles have already encountered[96] – though there is evidence that the market for research and development in this area remains strong.[97]
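For convenience, the six levels just described can be condensed into a short reference structure; the wording below paraphrases the descriptions in this chapter rather than the formal SAE definitions.

```python
# Condensed summary of the levels of driving automation as described in this chapter.
DRIVING_AUTOMATION_LEVELS = {
    0: "No automation: the unassisted human driver controls the vehicle throughout.",
    1: "Driver assistance: ADAS features such as AEB assist at particular moments; the driver stays in control.",
    2: "Partial automation: self-parking and motorway self-driving moments; the driver must remain alert and ready to take over.",
    3: "Conditional automation: self-driving across a wider range of situations; the driver must take back control when advised.",
    4: "High automation: full self-driving within pre-determined zones, with handover at the zone boundary.",
    5: "Full automation: no driver required at all; the occupant is arguably a user rather than a driver.",
}

for level, description in DRIVING_AUTOMATION_LEVELS.items():
    print(f"Level {level}: {description}")
```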

Liability for Accidents Caused while Self-Driving

Self-driving vehicles raise understandable issues around liability regimes. At present many vehicles in Ireland come equipped with a nascent form of driverless car technology, known as Advanced Driver Assistance Systems (“ADAS”), which brings significant upside in reducing risk: advanced emergency braking systems can react faster than a human driver; blind-spot monitoring systems can afford superior peripheral vision; forward collision warning systems monitor the road in front; parking sensors make parking safer; and tyre pressure monitoring informs the driver when a tyre has low pressure. Yet these systems are only the beginning – the clue lies in the name – these are advanced driver assistance systems designed to assist the driver, while it is anticipated that the vehicles of the future, using a more advanced form of ADAS, will not require a driver at all. Some depictions of driverless technology in the future show the driver sleeping in the rear of the vehicle while it is in motion. Known as Level 5 technology, this iteration has the potential to radically transform how we use the car, and potentially changes the car-ownership model – in the future many of us will not own cars at this level at all; rather, they will be leased or hired. Still, imagine the liability issues around technology of this kind. What happens, for instance, when a vehicle in self-driving mode crashes into another vehicle and causes a fatality?

We already have an indication of how the authorities will view this. In 2019 a man named Aziz Riad, according to The New York Times, was driving his Tesla in Autopilot mode when the vehicle left a freeway at high speed and crashed into a Honda Civic, killing the two people in the other car – Gilberto Alcazar Lopez and Maria Guadalupe Nieves. The police charged Riad with manslaughter and he was subsequently sentenced to probation.[98] This gives an early indication of how the state will view the issue from a criminal law standpoint.

Interestingly, in the United Kingdom, a 2022 report[99] by the Law Commission of England and Wales and the Scottish Law Commission recommended the introduction of a new Automated Vehicles Act,[100] since enacted as the Automated Vehicles Act 2024,[101] to regulate vehicles that can drive themselves. The report draws a clear distinction between features which merely assist drivers, such as those already mentioned, and those that are self-driving.

When a car is authorised by a regulatory agency as having “self-driving features” and those features are in use, the person in the driving seat would no longer be responsible for how the car drives. Instead, the company or body that obtained the authorisation (an Authorised Self-Driving Entity) would face regulatory sanctions if anything goes wrong.

The person in the driving seat would no longer be a driver but a “user-in-charge”. A user-in-charge could not be prosecuted for offences which arise directly from the driving task.[102] However, the user-in-charge retains other driver duties, such as paying tolls.[103] There are still penalties: Section 53 creates an offence under existing Road Traffic legislation where a vehicle, at the time of its use, has no individual exercising, or in a position to exercise, control of the vehicle.[104] The Law Commissions considered that liability applied in this way should be subject to review in the future as “more evidence of driver behaviour and capacity in relation to new technology becomes available”.[105]

The Law Commissions’ recommendations, and the subsequent enactment of legislation, build on the civil reforms introduced by the Automated and Electric Vehicles Act 2018 in that jurisdiction, which places liability, generally speaking, in the hands of the insurer of the vehicle (Section 2) – though the insurer can claim against any person “responsible for the accident” (Section 5). As part of their review of the operation of the 2018 Act, the Law Commission of England and Wales and the Scottish Law Commission, in their joint report, felt that liability issues under that Act were “good enough for now”.[106] It is worth pointing out, then, that in the event of accidents, civil liability to other road users will be met by insurers under the pre-existing 2018 Act, while the 2024 Act further develops the concept of responsibility for driving offences arising from the use of an automated vehicle.

Closer Look at the Self-Driving Regulatory Options

There are various legal issues which arise in respect of self-driving, or automated,[107] vehicles[108] and these are interesting from the standpoint of liability for Artificial Intelligence systems generally. One of the issues, already addressed within its UK context, touches on responsibility: who will be responsible in the event of an accident while the vehicle is self-driving? Other issues touch upon the road architecture Governments will be required to provide and how to integrate these vehicles into the national fleet alongside the more conventional vehicle types we are all accustomed to driving today. While the issues are many and various, this chapter will now briefly consider the first of those issues – the question of liability for accidents.

Ireland will have to decide how it treats the issue of liability for damage caused by accidents which occur as a result of a self-driving vehicle driving itself. There are already precedents abroad: the United Kingdom Parliament, as mentioned above, has enacted legislation on this point. The Automated and Electric Vehicles Act 2018 (UK),[109] s.2(1) states:

“Where—

(a) an accident is caused by an automated vehicle when driving itself on a road or other public place in Great Britain,

(b) the vehicle is insured at the time of the accident, and

(c) an insured person or any other person suffers damage as a result of the accident,

the insurer is liable for that damage.”

Nor can the insurer avoid liability pursuant to the terms of its own insurance policy with the insured as subsection (6) states: “Except as provided by section 4, liability under this section may not be limited or excluded by a term of an insurance policy or in any other way.”

Section 4 does permit the insurance policy to restrict the liability of the insurer in strictly limited cases: where the damage suffered occurs as a direct result of, either, prohibited software alterations made by, or with the knowledge of, the insured person, or, where there has been a failure to install safety-critical software updates that the insured person knows, or ought reasonably to know, are safety-critical. These clauses address, respectively, the two related issues of “jail-breaking” authorised software and what are known as “over-the-air” software updates which download and install authorised updates automatically provided the consent of the end-user is given. 
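Purely as an illustration of how section 2(1) and the section 4 carve-outs interlock, the scheme might be sketched as follows. The field and function names are hypothetical, and the sketch assumes the insurance policy takes full advantage of the exclusions that section 4 permits; it is not a statement of how a court would apply the Act.

```python
from dataclasses import dataclass


@dataclass
class Accident:
    """Hypothetical facts of an accident involving an automated vehicle (illustrative only)."""
    caused_by_av_driving_itself: bool     # s.2(1)(a): accident caused by an automated vehicle when driving itself
    vehicle_insured: bool                 # s.2(1)(b): the vehicle is insured at the time of the accident
    damage_suffered: bool                 # s.2(1)(c): an insured person or any other person suffers damage
    prohibited_software_alteration: bool  # s.4: damage results directly from alterations made by, or known to, the insured
    missed_safety_critical_update: bool   # s.4: failure to install a safety-critical update the insured knew (or ought to have known) of


def insurer_liable(a: Accident) -> bool:
    """Rough reading of sections 2(1) and 4 of the Automated and Electric Vehicles Act 2018 (UK)."""
    if not (a.caused_by_av_driving_itself and a.vehicle_insured and a.damage_suffered):
        return False
    # Section 4 permits the policy to exclude liability only in these limited cases
    # (assumed here to have been exercised by the insurer).
    if a.prohibited_software_alteration or a.missed_safety_critical_update:
        return False
    return True


print(insurer_liable(Accident(True, True, True, False, False)))  # True
```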

The Law Commission of England and Wales and the Scottish Law Commission highlighted the issue of the meaning of an accident “caused by” an automated vehicle in section 2(1). They state:

“For liability to arise under section 2(1), the accident must be “caused by” the automated vehicle. Section 8(3) adds that an accident includes “two or more causally related accidents” and that an accident caused by an automated vehicle includes “an accident that is partly caused by an automated vehicle”. Otherwise, the meaning of causation is left to the courts, applying the general principles developed in cases concerning civil liability.”[110]

Another notable section is Section 8(1) which defines the concept of the vehicle “driving itself” stating:

“(a) a vehicle is “driving itself” if it is operating in a mode in which it is not being controlled, and does not need to be monitored, by an individual.”

The owner of the vehicle (note the word “owner” and not “driver”) can be liable for damage caused as a result of an accident when a vehicle which is not insured is driving itself on a road or other public place in Great Britain (Section 2(2)). Considering it is anticipated that most of these self-driving vehicles will be owned by ride-sharing companies, with very few owned by private individuals, this provision effectively places liability in the hands of large corporations in the vehicle leasing sector.

Section 5(1) then goes on to consider the right of an insurer to claim against a person responsible for the accident by stating:

“Where—

(a) section 2 imposes on an insurer, or the owner of a vehicle, liability to a person who has suffered damage as a result of an accident (“the injured party”), and

(b) the amount of the insurer’s or vehicle owner’s liability to the injured party in respect of the accident (including any liability not imposed by section 2) is settled,

any other person liable to the injured party in respect of the accident is under the same liability to the insurer or vehicle owner.”

These progressive provisions are among the first of their type anywhere in the world on the regulation of self-driving, or automated, vehicles; yet the Government in the UK has already opted to further enhance its regulatory framework:

The Automated Vehicles Act 2024 shifts criminal liability: if a vehicle passes the self-driving test to become an authorised Automated Vehicle,[111] criminal liability for road traffic offences shifts away from the AV’s passengers and onto the regulated licensed operators who become responsible for the AV’s journey.[112] The Act introduces the new concept of a “User in Charge”: the human in the vehicle ready to take back control if the vehicle issues a transition demand. In some circumstances the User in Charge may still be liable (Section 48). As recommended by the Law Commissions in their detailed recommendations, the manufacturer will be liable for how the “self-driving” car drives, and the human “driver” will effectively be immune from prosecution in the event of an accident (Section 47), though, as mentioned, there are instances where the immunity will not apply – during transitions between the human and the vehicle, for instance (Section 48).

The United Kingdom’s option of standalone legislation on the issue of self-driving vehicles will have been watched closely in the EU.[113] However, the United Kingdom position is not the only option available to the Irish (or EU) legislature when it comes to addressing the question of self-driving vehicles. Carrie Schroll puts forward[114] the idea that liability for any accidents involving self-driving cars should be eliminated, and recommends instead the creation in the United States of America of a National Insurance Fund to pay for all damages resulting from those accidents.

The author states:

“Car insurance is not the only cost associated with car accidents. Litigation is expensive, so each legal dispute over fault in an accident costs the parties substantially. By banning litigation and using the Fund exclusively, injured parties will avoid the exorbitant costs of litigation.”[115]

This question of liability for self-driving accidents is a divisive one in that jurisdiction. The Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution Act, otherwise known as the “Self-Drive Act”, is a Bill of the United States Congress which seeks to address the issue of accident liability. It passed unanimously through the House of Representatives in 2017, only to fall in the Senate amid concerns by Democrats and lawyers over the right to sue if someone is hurt or killed in a self-driving car. The Bill seeks to protect auto manufacturers and technology companies from legal responsibility; it was resurrected in September 2020 and referred to the Subcommittee on Consumer Protection and Commerce in 2021.[116]

Certainly the United Kingdom is ahead on the issue of regulation for self-driving vehicles. It has devoted a significant level of resources to consideration of the issue by the Law Commission of England and Wales and the Scottish Law Commission and has already enacted two standalone statutes on the matter, as mentioned above. In some respects the legislative intervention is well ahead of the market: this book has noted elsewhere that significant technical difficulties have been encountered by those involved in the deployment of vehicles of this type on our roads, and, tragically, there have already been fatalities. Taken together, these outcomes may have dampened progress in this area – though, as already mentioned, the market for technology of this type is still generating investment[117] and recent reports indicate that roll-out is continuing.[118]

Passenger Name Records

Another example given by Wagner[119] of an area where there is a human in the loop is that of police officers conducting passenger searches based on algorithms used to analyse both Passenger Name Records (PNRs) and social media data. The purpose of the searches is to identify criminals on flights, with the social media data enriching the PNR data. The matching is not made with 100% certainty; rather, the algorithms generate statistical probabilities that individuals are likely to be part of a certain group of criminals. Police officers then decide which threshold of probability must be reached before they intervene. This can result in particular persons being selected by police officers for further inspection at the border. The boundary is consequently blurred as to whether police officers make the decision themselves or whether that decision is made by an automated system.
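The mechanics Wagner describes can be illustrated with a short, entirely hypothetical sketch: a model assigns each passenger a probability score and officers choose the threshold above which a passenger is selected for further inspection. The names and figures below are invented.

```python
# Hypothetical risk scores produced by a PNR-matching model (invented figures).
passenger_scores = {
    "passenger_A": 0.12,
    "passenger_B": 0.47,
    "passenger_C": 0.83,
}


def select_for_inspection(scores: dict, threshold: float) -> list:
    """Return passengers whose model probability meets or exceeds the chosen threshold.

    The model only outputs statistical probabilities; the threshold is the
    human-set policy choice that turns a score into an intervention.
    """
    return [name for name, probability in scores.items() if probability >= threshold]


# A higher threshold selects fewer passengers; a lower one selects more.
print(select_for_inspection(passenger_scores, threshold=0.8))  # ['passenger_C']
print(select_for_inspection(passenger_scores, threshold=0.4))  # ['passenger_B', 'passenger_C']
```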

Wagner states:

“The question that needs to be asked in this context is what the legal ramifications are for an individual police officer in a specific legal jurisdiction of receiving a recommendation by the PNR system. Is the police officer required to interpret the response of the automated system as a “tip” and investigate it? Or are they able to ignore the results of the system if they believe the individual to not pose a threat or to have been falsely identified?

In general, the amount of time which border guards have to ascertain whether a traveller is a threat or not is relatively short. For example, the European Union (EU) Border agency Frontex suggests that an “EU border guard has on average just 12 seconds to decide whether the traveller in front of them is legitimate or not” (…) It seems highly unlikely that border guards would have significantly longer to assess the results of an automated system. Furthermore, based on conversations with individuals knowledgeable about the matter, it seems highly unlikely that a police officer would not follow any individual leads provided by the automated system (…). This is in part because police forces use individual searches to calibrate the system and ensure that they are targeting the right groups or individuals. Thus, each search is not just a search for the purpose of finding criminal activity, it also contributes to testing a police hypothesis about likely criminals, for which both positive and negative responses are important to validate the hypothesis. (…)

In consequence, if an individual police officer is not actively able to ignore a specific individual recommendation of an algorithm to search a person at the border, this also means that the decisions made by the algorithm are de facto automated. While the “human in the loop” is of course necessary to conduct a search, the police officer involved also ends up becoming liable for all decisions made by that algorithm, because—at least formally—they were made by a human (…) Thus, the decision to search an individual is essentially made in a quasi-automated fashion, which is essentially automated and includes a human in the loop who does not have active agency at an individual level. If the automated system makes a mistake in correctly or incorrectly identifying a criminal, its results are likely to be optimized on this basis. However, there is currently no framework in which software developers could be held liable for the errors made by this system. By contrast, police officers are held liable personally and directly for any mistakes made.”[120]

Yang et al, in a paper,[121] consider the main ways that automated decision-making is used at EU borders and whether such use poses risks to human rights. The paper identified the use of automated decision-making at frontiers and three broad categories of human rights risks arising from that use: privacy and data protection; non-discrimination; and fair trial and effective remedies.[122] The EU AI Act classifies AI use at borders as high-risk under Annex III. Recital 60 states:

“AI systems used in migration, asylum and border control management affect persons who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee respect for the fundamental rights of the affected persons, in particular their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, AI systems intended to be used (…) in the fields of migration, asylum and border control management (…)”

Content Moderation

Another example of human-in-the-loop hybrid system management is social media content moderation. Wagner gives the example of the platform Facebook, which employs numerous filters and algorithms to decide what content individual users should see, meaning a large part of the platform is automated. Content moderation, however, is one area that does require human input. Human moderators respond to complaints made by users about the content that they see on the platform, while another group of humans is responsible for filtering out photographs considered contrary to the platform’s community standards. This process has given rise to debate about the decisions taken in respect of different image types. Facebook is said to base almost all of its content moderation decisions on its terms of service as opposed to applicable law.[123] Wagner asks why the human moderators are there at all and states:

“However, it can also be suggested that much of their work is not actually there to contribute to human decision making, but rather to suggest that humans—both the users of Facebook and the staff at Facebook—actually have agency.”[124]

This is in circumstances where the system of escalation is “a complete black box to Facebook users and the general public.”

“They have no way of knowing whether or not Facebook will respond to an individual complaint, how it will respond, and whether a human will be tasked with any such response.”[125]

The GDPR was mentioned insofar as there may be rights under that Regulation to an explanation in respect of automated decision-making.[126] However, Wagner aptly notes that, insofar as such a right is applicable, it would apply only to automated decisions and not to human-in-the-loop arrangements: “Thus, it is conceivable that by involving a “human in the loop,” companies could avoid this right to explanation.”[127]

Automated decisions, it should be noted, are covered under Art. 22 of GDPR, which is set out below and followed by a short illustrative sketch of how its conditions interact:

  1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
  2. Paragraph 1 shall not apply if the decision:
    (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
    (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
    (c) is based on the data subject’s explicit consent.
  3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.
  4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.
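
How these conditions interact can be illustrated with a deliberately simplified sketch. It is a reading aid produced by the present author, not a compliance tool: the function and its parameters are assumptions, and the many nuances of Article 22 (and of Member State law) are ignored.

```python
# Simplified illustration of the structure of Article 22 GDPR; not a compliance tool.

def article_22_right_applies(
    solely_automated: bool,          # decision based solely on automated processing, incl. profiling
    legal_or_similar_effect: bool,   # produces legal effects or similarly significantly affects the person
    necessary_for_contract: bool,    # exception in Art. 22(2)(a)
    authorised_by_law: bool,         # exception in Art. 22(2)(b), with safeguards laid down by that law
    explicit_consent: bool,          # exception in Art. 22(2)(c)
) -> bool:
    """Return True if the data subject's right under Art. 22(1) applies to the decision."""
    if not (solely_automated and legal_or_similar_effect):
        return False  # Art. 22(1) is not engaged at all
    if necessary_for_contract or authorised_by_law or explicit_consent:
        return False  # one of the Art. 22(2) exceptions applies
    return True

# Example: a fully automated loan refusal where no exception is engaged
print(article_22_right_applies(True, True, False, False, False))  # True
```

Note that even where the exceptions in points (a) or (c) apply, Article 22(3) still requires safeguards, including the right to obtain human intervention, which connects back to Wagner’s concern about merely nominal “humans in the loop”.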

In SCHUFA[128] the Court of Justice of the European Union (CJEU) looked at the meaning of the word “decision” to determine at what point in the process the rights attaching to automated decision-making applied. It held that the term should be given a broad definition, covering not just the ultimate decision (to approve a loan) but steps further back in the process (the credit reference agency’s probability values). Interestingly, the court reasoned that if the credit reference agency was not subject to the automated decision-making rules, then those affected by the ultimate decision would have no way to exercise their rights in respect of that process, including the right to obtain meaningful information about the logic involved.[129]

Future Liability for a Robot?

In terms of the development of Artificial Intelligence systems, many in this field see the technology evolving to a point where artificial intelligence acquires human or above-human intelligence. This issue is dealt with in more detail in the following chapter on superintelligence. The question would then arise of assigning responsibility for autonomous decisions made by the artificial intelligence. While this might seem far-fetched, it was not considered far-fetched by the EU, which, at one stage, proposed liability for a robot.[130] The Hon. Katherine Forrest (fmr) considers, in an article, the ethics and challenges of legal personhood for AI.[131] She describes the likely future outcome of AI achieving sentience, which she defines as “some combination of cognitive intelligence that includes the ability to solve problems that one has never previously encountered, and to have a sense of self-awareness and awareness of where one fits in the broader world.”

“Some will never concede that AI has or can achieve any form of sentience, persisting in the belief that sentience is a uniquely human quality. But others will recognize advanced AI for what it is—that it will understand its place in the world, its surroundings, what it is, and what we are in relation to it, and that it will be as smart or smarter than we are. AI may then be able to perceive variances in its condition or treatment that we might characterize as having an emotive quality. Frankly, we just don’t know all that sentient AI will or can be. But it may deserve ethical considerations that we have previously reserved mostly, but not entirely, for humans.”[132]

The author continues that when society is confronted with sentient AI we will need to decide whether it has legal status, and she argues that the issues surrounding this idea should be raised now. She considers a future scenario called “model drift”, where an AI model is trained to perform in a particular way but, over time, other processes cause it to “drift” away from its original purpose without human intervention. “If harm is caused, courts may analogize the situation to a potentially known hazard or harm that could occur and use negligence principles to tether it back to a responsible human.” In an even more complicated case, humans will be unable to trace the actions of an AI tool to a human design element: a concept called emergent capabilities.

“Nevertheless, courts may turn to the corporate or educational entity associated with the tool—the owner of the tool if you will. If the AI tool is considered a legal agent of the entity, this entity would typically bear responsibility under agency principles.”[133]

Further still along the spectrum lies the point where an AI acts in a manner that neither its original designer, licensor, nor licensee ever intended. This she describes as an ultra vires action.

“Today, in such a case, a human employee may be held responsible—because that individual actor, the human who purported to act in the corporation’s name, exceeded the bounds of his or her authorization. That individual may therefore incur personal liability. But in the scenario I am positing, there will be no such human. The “being” which will have acted outside of the scope of their authorization will be a nonperson. What then is a court to do—either analytically or practically? The initial framework could well be to tie the AI’s actions back to the “person” closest in the chain of causation under the theory that autonomous actions were a known and assumed risk. In this way, for some series of cases, assumption of the risk of autonomous activity can allow courts to work within a known framework. But the cases will get even more complicated from there. Among the complexities will be the harms caused by distributed AI and the harms caused by intentionally acting, sentient AI.”[134]

Coupled with this is the idea that an AI will not exist in a single place but will be distributed – the software will be spread over a number of unrelated computers. In effect the AI will have omnipresence. 

On the issue of assigning legal responsibility, then, we are effectively looking at the future issue of liability for a deployed robot: in other words, is the robot itself liable for its own actions, or is someone else liable for them? In this respect many commentators begin with a reference to the so-called rules of Isaac Asimov, the twentieth-century writer of robot stories (… “1. A robot may not injure a human being… 2. A robot must obey the orders given… 3. A robot must protect its own existence…”). These, however, seem inapt, principally because the rules were never meant to have any application in a legal, or even an industrial, environment. In any event it does not seem realistic to expect that an AGI could simply be launched with a set of accompanying laws of this type – we are into the realms of science fiction.

Of more interest, however, are the developments within the EU, where the legal personality of the AGI was put in issue. In 2016 a report by the European Parliament’s Committee on Legal Affairs went as far as suggesting the adoption of a separate status of “electronic personhood” to accommodate the legal responsibility of AI. This was much criticised at the time; many saw it as a move towards assigning legal responsibility to the robot.[135] While a defence of sorts on the finer details of the point was ultimately published in 2020, the proposal was not adopted.[136]

Kate Darling, in her succinct overview of this area in The New Breed,[137] makes the important point that we may not need to re-invent the rules for AI, or AGI, at all – the matter can be dealt with under existing legal structures, citing product liability rules and the law’s analogous treatment of animals under the scienter principle. That rule makes the keeper of an animal liable for any damage caused by the animal if it is either a “wild animal” (ferae naturae) or, being a “tame animal” (mansuetae naturae), it has a vicious propensity known to the keeper.[138]

The author states:

“Today, as robots start to enter into shared spaces …  it is especially important to resist the idea that the robots themselves are responsible, rather than the people behind them. I’m not suggesting that there are more ways to think about the problem than trying to make the machines into moral agents. Trying on the animal analogy reveals that this is perhaps not as historic a moment as we thought, and the precedents in our rich history of assigning responsibility for unanticipated animal behaviour could, at the very least, inspire us to think more creatively about responsibility in robotics.”[139]

This is a very valid point, and one that should be carefully considered by policymakers, whoever they may be, when AGI is imminent.

Contract Formation

Concomitant with the issue of liability per se is the use of Artificial Intelligence in the formation of contracts. One source considers the issue and concludes that there are no fundamental obstacles to such use:

“The resulting conclusion is that existing contract law does not seem to contain fundamental obstacles to the use of AI systems in the process of contract formation. Although uncertainty remains, e. g., as to the circumstances in which the use of an AI system may be considered diligent or reasonable, this uncertainty is not due to fundamental legal obstacles.”[140]

Frattone, in an article,[141] notes that automated contract formation is neither new nor novel: it can be traced back to the advent of vending machines[142] and, in the context of online commerce, dates back some 40 years. She notes that automated contract formation is used now more than ever, but that computers are not “infallible”, with the most common errors falling under the heading of algorithmic mistakes – where, for instance, the code is flawed as a result of programming errors. Furthermore, AI agents involved in the process can display ‘emergent behaviour’ and act in ways not predicted by their creators. She notes that UNCITRAL is considering the enactment of specific measures on computer errors[143] and that the topic of automation was recently considered by a court in Singapore[144] in relation to automated transactions in cryptocurrencies: ultimately the matter was decided on the basis of the platform’s terms of service and the transactions could not be voided.
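
A hypothetical illustration of the kind of algorithmic mistake Frattone has in mind – the product, the prices and the unit-conversion bug below are inventions of the present author, not taken from any real system or from her article – is a pricing routine that mishandles a cents-to-euro conversion, so that the automated contracting agent quotes a “blatantly absurd price” without any human noticing:

```python
# Hypothetical example of an algorithmic pricing mistake; all values are invented.

CATALOGUE_CENTS = {"widget": 1999}  # intended price: EUR 19.99, stored in cents

def quoted_price_eur(item: str) -> float:
    """Meant to convert a stored price in cents into euro.

    Bug: the code divides by 10_000 instead of 100, so the automated agent
    offers the item at EUR 0.1999 instead of EUR 19.99.
    """
    return CATALOGUE_CENTS[item] / 10_000  # should be: / 100

print(quoted_price_eur("widget"))  # 0.1999 - an absurd offer generated without human review
```

In practice such an error may go unnoticed until counterparties have already accepted a large number of offers at the erroneous price.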

Another issue arises in relation to the law on mistake in contract. The author states:

“Arguably, the law on mistake can be applied by analogy to algorithmic mistakes on a case-by-case basis. However, mistake in contract law does not usually encompass circumstances that affect the economic convenience of the deal without impinging upon essential elements of the contract. Accordingly, the case of a blatantly absurd price set by a computer program would fall outside the scope of mistake. Moreover, not all types of algorithmic mistakes match the legal definition of mistake. Finally, there exist significant differences in the application of the law on mistake in domestic jurisdictions that would undermine legal certainty as to cross-border automated contracting. Therefore, the introduction of a specific provision on algorithmic mistakes into e-commerce law could be considered for promoting legal certainty and fostering private law harmonization.”[145]     

Conclusion

In terms of liability, the European Commission’s proposed Directive on Artificial Intelligence Liability (now defunct) was intended to complement the proposed EU AI Act[1] and any further measures requiring bespoke regulation in this space – such as those which may be needed in respect of “self-driving” vehicles. Those measures would have been timely: Artificial Intelligence is becoming more and more prominent in our lives, and it seems that with each passing week we are witnessing the results of exponential growth in this industry. We are now on the cusp of the Age of Artificial Intelligence, and it is important that our regulations remain effective, appropriate, and apace with developments.


[1] See Chapter 9


[1] “Google disclaims any and all responsibility or liability for the accuracy, content, completeness, legality, reliability, or operability or availability of information or material displayed in the Google Search Services results.” https://policies.google.com/terms/archive/20010606?hl=en#

[2] Rimkute, Deimante. “AI and Liability in Medicine: The Case of Assistive-Diagnostic AI.” Baltic Journal of Law and Politics, vol. 16, no. 2, February 2024, pp. 64-81. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/bjlp16&i=253. The article states: “While the collaboration between assistive-diagnostic AI and humans may improve the identification of potential pathologies, it may also introduce the risk of misdiagnosis due to errors from either the AI or the doctor. Such scenarios raise questions about the liability of doctors or AI producers themselves.”(at 65) For treatment of some of the tools available to physicians see Horak, Jakub, et al. “Healthcare Generative Artificial Intelligence Tools in Medical Diagnosis, Treatment, and Prognosis.” Contemporary Readings in Law and Social Justice, vol. 15, no. 1, July 2023, pp. 81-98. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/conreadlsj15&i=73.

[3] Johnson, Vincent R. “Artificial Intelligence and Legal Malpractice Liability.” St. Mary’s Journal on Legal Malpractice and Ethics, vol. 14, no. 1, 2024, pp. 55-93. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/smjmale14&i=66 at 55.

[4] Padovan, P.H., Martins, C.M. & Reed, C. Black is the new orange: how to determine AI liability. Artif Intell Law 31, 133–167 (2023). https://doi.org/10.1007/s10506-022-09308-9

[5] One author refers to this prospect when she states: “The definition of the subject of liability is very important to investigate the tort liability and compensate the victim. Typically, the responsible parties include AI developers, providers, users, and other parties that may be involved. The liability of each subject shall be determined according to its degree of control over the infringement, its degree of participation and its degree of fault.” Ding Ling, Analysis on Tort Liability of Generative Artificial Intelligence. Science of Law Journal (2023) Vol. 2: 102-107. DOI: 10.23977/law.2023.021215 at 105

[6] Zirpoli, CRS Legal Sidebar (February 23, 2023) 10922 see https://crsreports.congress.gov/product/pdf/LSB/LSB10922

[7] Henderson, Hashimoto, Lemley, Where’s the liability in harmful speech? 3 J. Free Speech L. 589 (2023)

[8] LLC, No. 1:23-cv-03122 (N.D. Ga.)

[9] https://news.bloomberglaw.com/ip-law/openai-fails-to-escape-first-defamation-suit-from-radio-host

[10] https://www.crowell.com/en/insights/client-alerts/can-ai-defame-we-may-know-sooner-than-you-think

[11] https://www.crowell.com/en/insights/client-alerts/can-ai-defame-we-may-know-sooner-than-you-think

[12] https://www.crowell.com/en/insights/client-alerts/can-ai-defame-we-may-know-sooner-than-you-think

[13] https://www.irishtimes.com/opinion/2024/01/24/dave-fannings-defamation-case-is-latest-in-a-wave-of-ai-related-litigation/

[14] Ibid.

[15] https://www.dlapiper.com/en-at/insights/publications/2024/03/explainability-misrepresentation-and-the-commercialization-of-artificial-intelligence

[16] See Bridges, Khiara M. “Race in the Machine: Racial Disparities in Health and Medical AI.” Virginia Law Review, vol. 110, no. 2, April 2024, pp. 243-340. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/valr110&i=250.

[17] See also the article Thomas, AI: The five biggest risks for barristers, Counsel, October 2024 at p. 22 which mentions bias and discrimination as the second highest AI-related risk for a barrister. 

[18] https://www.dlapiper.com/en-at/insights/publications/2024/03/explainability-misrepresentation-and-the-commercialization-of-artificial-intelligence

[19] Padovan, P.H., Martins, C.M. & Reed, C. Black is the new orange: how to determine AI liability. Artif Intell Law 31, 133–167 (2023). https://doi.org/10.1007/s10506-022-09308-9

[20] Ibid.

[21] https://www.pinsentmasons.com/out-law/news/air-canada-chatbot-case-highlights-ai-liability-risks

[22] Bias in algorithms has been the subject of a 2022 report by the European Union Agency for Fundamental Rights wherein it refers to algorithms used for identification of hate speech and finds owing to the sheer volume of data algorithms are essential: “If a certain message is hateful, this can most readily be judged by the person it is addressed to. And the way it is judged may differ between people. Hence, there is no universal assessment of offensiveness of certain phrases. There are significant differences in assessing content as offensive based on the demographics of those assessing the content. For example, what a man may not consider offensive may very well be perceived as offensive by a woman, or the other way round. This challenges the quality and usefulness of data with fixed labels of offensiveness. Therefore, a final assessment of the hatefulness of online content should be made by humans. However, the practicality of this, given the volume of online data content is seemingly insurmountable. The sheer volume of online content that large platforms have to deal with necessitates the support of their content moderation activities by algorithms.” https://fra.europa.eu/sites/default/files/fra_uploads/fra-2022-bias-in-algorithms_en.pdf

[23] https://www.mhc.ie/latest/insights/artificial-intelligence-and-the-impact-on-hr-practices

[24] Ibid.

[25] https://www.dlapiper.com/en-at/insights/publications/2024/03/explainability-misrepresentation-and-the-commercialization-of-artificial-intelligence

[26] See Ding Ling, Analysis on Tort Liability of Generative Artificial Intelligence. Science of Law Journal (2023) Vol. 2: 102-107. DOI: 10.23977/law.2023.021215 at 104.

[27] Ibid. 

[28] Ibid.

[29] Ibid.

[30] See section below. 

[31] Vasudevan, Amrita. Addressing the Liability Gap in AI Accidents. Centre for International Governance Innovation, 2023. JSTOR, http://www.jstor.org/stable/resrep52623. Accessed 2 June 2024 at p. 1.

[32] Ibid at 106.

[33] Rimkuté, Deimante. “AI and Liability in Medicine: The Case of Assistive-Diagnostic AI.” Baltic Journal of Law and Politics, vol. 16, no. 2, February 2024, pp. 64-81. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/bjlp16&i=253. The article states: “While the collaboration between assistive-diagnostic AI and humans may improve the identification of potential pathologies, it may also introduce the risk of misdiagnosis due to errors from either the AI or the doctor. Such scenarios raise questions about the liability of doctors or AI producers themselves.”(at 65) For treatment of some of the tools available to physicians see Horak, Jakub, et al. “Healthcare Generative Artificial Intelligence Tools in Medical Diagnosis, Treatment, and Prognosis.” Contemporary Readings in Law and Social Justice, vol. 15, no. 1, July 2023, pp. 81-98. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/conreadlsj15&i=73.

[34] One source puts the number of approved AI tools in medicine at 962 up to October 2023. https://spyro-soft.com/blog/healthcare/regulation-of-ai-in-healthcare-in-2024-eu-and-fda-approaches

[35] Ibid at 65.

[36] Ibid at 66.

[37] Ibid at 68.

[38] Ibid.

[39] Ibid at 69.

[40] Ibid at 70.

[41] Ibid at 71

[42] Ibid.

[43] See section underneath. See also https://www.mhc.ie/latest/insights/eu-product-liability-reform-for-ai-systems-and-ai-enabled-digital-health-products#:~:text=Broader%20scope%20of%20the%20PLD,the%20scope%20of%20the%20PLD which states: “digital health stakeholders with products on the EU market should carefully consider their potential liability exposure under the PLD”.

[44] See generally Guido Noto La Diega, Leonardo C T Bezerra, Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae021, https://doi.org/10.1093/ijlit/eaae021

[45] Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) which can be accessed https://commission.europa.eu/system/files/2022-09/1_1_197605_prop_dir_ai_en.pdf

[46] European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (European Parliament, 16 February 2017), para 59(f).

[47] See above

[48] For example, it does not necessarily follow that the Proposed Directive is intended to cover questions around liability for Artificial General Intelligence (“AGI”): this is something the EU, ultimately, will need to be more explicit about for its Member States. For general overview of Artificial General Intelligence see Byrne, John P “Regulating AI” in The Bar Review, Volume 28, Number 1, February 2023 at p. 12

[49] Explanatory Memorandum to the Proposed Directive. 

[50] The Future of Life Position Paper on AI Liability available to access at https://futureoflife.org/wp-content/uploads/2022/11/FLI_AI_Liability_Position_Paper.pdf

[51] COM(2022) 495 – Proposal for a directive of the European Parliament and of the Council on liability for defective products at 4 (6) where it states: “‘damage’ means material losses resulting from: (a) death or personal injury, including medically recognised harm to psychological health; (b) harm to, or destruction of, any property, except: (i) the defective product itself; (ii) a product damaged by a defective component of that product; (iii) property used exclusively for professional purposes; (c) loss or corruption of data that is not used exclusively for professional purposes;” available to access at https://single-market-economy.ec.europa.eu/system/files/2022-09/COM_2022_495_1_EN_ACT_part1_v6.pdf

[52] https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807

[53] Article 1(2) of the Proposed Directive. The Commission states: “The purpose of the AI Liability Directive is to lay down uniform rules for access to information and alleviation of the burden of proof in relation to damages caused by AI systems, establishing broader protection for victims (be it individuals or businesses), and fostering the AI sector by increasing guarantees. It will harmonise certain rules for claims outside of the scope of the Product Liability Directive, in cases in which damage is caused due to wrongful behaviour. This covers, for example, breaches of privacy, or damages caused by safety issues. The new rules will, for instance, make it easier to obtain compensation if someone has been discriminated in a recruitment process involving AI technology” accessible at https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807

[54] Article 1(3b) of the Proposed Directive. 

[55] Guido Noto La Diega, Leonardo C T Bezerra, Can there be responsible AI without AI liability? Incentivizing generative AI safety through ex-post tort liability under the EU AI liability directive, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae021, https://doi.org/10.1093/ijlit/eaae021, see abstract. The authors also state: 

[56] Article 2(9)  of the Proposed Directive. 

[57] Explanatory Memorandum to the Proposed Directive.

[58] Explanatory Memorandum to the Proposed Directive.

[59] See Ana Rački Marinković, Liability for AI-related IP infringements in the European Union, Journal of Intellectual Property Law & Practice, 2024;, jpae061, https://doi.org/10.1093/jiplp/jpae061

[60] Explanatory Memorandum to the Proposed Directive.

[61] Article 5 of the Proposed Directive. 

[62] European Union Intellectual Property Office (EUIPO), ʽStudy on the impact of artificial intelligence on the infringement and enforcement of copyright and designsʼ (2022) 64. Available at www.euipo.europa.eu/en/publications/study-on-the-impact-of-artificial-intelligence-on-the-infringement-and-enforcement-of-copyright-and-designs (accessed 10 June 2024).

[63] Ana Rački Marinković, Liability for AI-related IP infringements in the European Union, Journal of Intellectual Property Law & Practice, 2024;, jpae061, https://doi.org/10.1093/jiplp/jpae061 at p. 2

[64] Directive on Liability for Defective Products which can be accessed https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0495

[65] https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807. For example: Article 10 of the Proposed New Products Liability Directive states that no exemption from liability shall apply to the benefit of the manufacturer of the product in circumstances where “the defectiveness of the product is due to any of the following, provided that it is within the manufacturer’s control: 

(a) a related service; 

(b) software, including software updates or upgrades; or

(c) the lack of software updates or upgrades necessary to maintain safety.” 

[66] Article 4(1) of the Proposed New Products Liability Directive. 

[67] See further down this chapter

[68] See Baris Soyer, Andrew Tettenborn, Artificial intelligence and civil liability—do we need a new regime?, International Journal of Law and Information Technology, 2023;, eaad001, at p. 7 https://doi.org/10.1093/ijlit/eaad001

[69] Submission published on Linked In and available:https://www.linkedin.com/posts/thebarofireland_bar-of-ireland-submission-to-aild-activity-7026510281937678336-peEW/?originalSubdomain=ie

[70] Elina Nerantzi, Giovanni Sartor, ‘Hard AI Crime’: The Deterrence Turn, Oxford Journal of Legal Studies, 2024; https://doi.org/10.1093/ojls/gqae018

[71] Sabine Gless, Emily Silverman, Thomas Weigend, If Robots cause harm, Who is to blame? Self-driving Cars and Criminal Liability, New Criminal Law Review (2016) 19 (3): 412–436 https://doi.org/10.1525/nclr.2016.19.3.412 where the authors argue in favour of limiting the criminal liability of operators to situations where they neglect to undertake reasonable measures to control the risks emanating from robots. See also Fletcher, Deterring Algorithmic Manipulation, 74 Vanderbilt Law Review 259 (2021), available at https://scholarship.law.vanderbilt.edu/vlr/vol74/iss2/2, where the author also focuses on market manipulation and states: “Importantly, the law’s failure to deter algorithmic manipulation undermines market stability, exposing the market to a significant source of systemic risk” (at p. 325). Another source points to the new ethical concerns that arise if AI persuades people to behave dishonestly: Leib, Margarita, et al. Corrupted by Algorithms?: How AI-Generated and Human-Written Advice Shape (Dis)Honesty. IZA – Institute of Labor Economics, 2023. JSTOR, http://www.jstor.org/stable/resrep57648. Accessed 2 June 2024 at p. 1.

[72] Ibid

[73] Ibid. See  Sabine Gless,  Emily Silverman,  Thomas Weigend, If Robots cause harm, Who is to blame? Self-driving Cars and Criminal Liability, New Criminal Law Review (2016) 19 (3): 412–436 at pg. 416 https://doi.org/10.1525/nclr.2016.19.3.412 where the authors state:

“The issue of a robot’s potential responsibility leads us back to the fundamental question of what it means to be a ‘‘person.’’ Philosophers have long debated this question and have come to different conclusions. One approach has based personhood on the capacity for self-reflection. John Locke, for example, wrote that an ‘‘intelligent Agent,’’ meaning a human person, must be ‘‘capable of a Law, and Happiness and Misery.’’”

[74] Ibid

[75] https://www.technologyreview.com/2024/05/10/1092293/ai-systems-are-getting-better-at-tricking-us/

[76] See  Sabine Gless,  Emily Silverman,  Thomas Weigend, If Robots cause harm, Who is to blame? Self-driving Cars and Criminal Liability, New Criminal Law Review (2016) 19 (3): 412–436 at pg. 423 https://doi.org/10.1525/nclr.2016.19.3.412

[77] Abbott and Sarch, Punishing Artificial Intelligence: Legal Fiction or Science Fiction (2019) 53 (1) UC Davis L. Rev 323.

[78] Ibid at 375.

[79] For a recent example see: Robert J. Rhee, Do AIs Dream of Electric Boards?, 119 Nw. U. L. Rev. 1007 (2025).

https://scholarlycommons.law.northwestern.edu/nulr/vol119/iss4/4

[80] https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf

[81] Ibid at 38.

[82] Matthew Dahl, Varun Magesh, Mirac Suzgun, Daniel E Ho, Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models, Journal of Legal Analysis, Volume 16, Issue 1, 2024, Pages 64–93, https://doi.org/10.1093/jla/laae003

[83] Ibid at 89. For example: asking the LLM to provide information about an author’s dissenting opinion in an appellate case in which they did not in fact dissent and asking the LLM to furnish the year that a Supreme Court of the United States of America case that has never been overruled was overruled. (Ibid at 82). 

[84] Wagner, Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems 11 POL’Y & INTERNET 104, 108–09 (2019) available at: https://onlinelibrary.wiley.com/doi/10.1002/poi3.198

[85] Ibid at 116.

[86] “However, existing legal rules that, for example, forbid or allow certain forms of automation do so on the assumption that a “human in the loop” means that an actual human “check” will take place of the results of the automated system. If the person is able to only rubber-stamp the results produced by the algorithm, then these systems should perhaps more accurately be called “quasi-automated.” Wagner, Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems 11 POL’Y & INTERNET 104, 108–09 (2019) available at: https://onlinelibrary.wiley.com/doi/10.1002/poi3.198

[87] Araujo, Theo & Helberger, Natali & Kruikemeier, Sanne & de Vreese, Claes. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & SOCIETY. 35. 10 see https://www.researchgate.net/publication/338332492_In_AI_we_trust_Perceptions_about_automated_decision-making_by_artificial_intelligence/citation/download

[88] The issue of ethics and AI is discussed in another chapter.

[89] Article 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (2017)

[90] See  Sabine Gless,  Emily Silverman,  Thomas Weigend, If Robots cause harm, Who is to blame? Self-driving Cars and Criminal Liability, New Criminal Law Review (2016) 19 (3): 412–436 https://doi.org/10.1525/nclr.2016.19.3.412

[91] https://www.irishtimes.com/business/transport-and-tourism/jaguar-land-rover-to-partner-with-autonomous-car-hub-in-shannon-1.4409884

[92] https://www.theguardian.com/technology/2016/jul/01/tesla-driver-killed-autopilot-self-driving-car-harry-potter

[93] https://www.racfoundation.org/wp-content/uploads/Driver_training_for_future_automated_vehicles_Shaw_Large_Burnett_October_2020.pdf

[94] Wagner, Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems 11 POL’Y & INTERNET 104, 108–09 (2019) available at: https://onlinelibrary.wiley.com/doi/10.1002/poi3.198 at p 109.

[95] https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/disruptive-trends-that-will-transform-the-auto-industry/de-de#

[96] One article emphasises that currently cars of this type still require human assistance:https://www.nytimes.com/2024/09/11/insider/when-self-driving-cars-dont-actually-drive-themselves.html

[97] London-based AI maker Wayve raised $1 billion in May 2024 for further development of its self-driving technology. https://www.nytimes.com/2024/05/06/technology/wayve-ai-self-driving-vehicles.html

[98] https://fortune.com/2023/12/15/tesla-driver-to-pay-23k-in-restitution-crash-killed-2-people/#

[99] https://s3-eu-west-2.amazonaws.com/cloud-platform-e218f50a4812967ba1215eaecede923f/uploads/sites/30/2022/01/Automated-vehicles-joint-report-cvr-03-02-22.pdf

[100] Ibid at chapter 2. 

[101] https://www.legislation.gov.uk/ukpga/2024/10/contents/enacted

[102] Section 47. The Law Commissions refer to offences ranging from dangerous driving to exceeding the speed limit or running a red light (Joint Report of the Law Commission of England and Wales and the Scottish Law Commission, Automated Vehicles (2022) at pg 148, available at https://s3-eu-west-2.amazonaws.com/cloud-platform-e218f50a4812967ba1215eaecede923f/uploads/sites/30/2022/01/Automated-vehicles-joint-report-cvr-03-02-22.pdf)

[103] Section 48.

[104] Section 53 which inserts a new section 34B into the Road Traffic act 1988. See also Joint Report of the Law Commission of England and Wales and the Scottish Law Commission Automated Vehicles (2022) at pg 254 available at https://s3-eu-west-2.amazonaws.com/cloud-platform-e218f50a4812967ba1215eaecede923f/uploads/sites/30/2022/01/Automated-vehicles-joint-report-cvr-03-02-22.pdf

[105] Ibid at 51.

[106] Joint Report of the Law Commission of England and Wales and the Scottish Law Commission Automated Vehicles (2022) at pg 8 available at https://s3-eu-west-2.amazonaws.com/cloud-platform-e218f50a4812967ba1215eaecede923f/uploads/sites/30/2022/01/Automated-vehicles-joint-report-cvr-03-02-22.pdf

[107] The proposed Automated Vehicles Bill 2023/2024(UK)  seeks to adopt the Law Commission position of denoting vehicles of this type as “automated vehicles” instead of “self-driving” vehicles. See https://commonslibrary.parliament.uk/research-briefings/cbp-9973/#:~:text=The%20Automated%20Vehicles%20Bill%20%5BHL,driving%20vehicles%20in%20Great%20Britain.

[108] The Law Commission of England and Wales jointly with the Scottish Law Commission is extensively addressing a variety of issues in this area: https://www.lawcom.gov.uk/project/automated-vehicles/

[109] https://www.legislation.gov.uk/ukpga/2018/18/section/4/enacted

[110] Law Commission of England and Wales and Scottish Law Commission, Automated Vehicles a Joint Consultation Paper, at 6.40.

[111] See Section 3. 

[112] https://commonslibrary.parliament.uk/research-briefings/cbp-9973/#:~:text=The%20Automated%20Vehicles%20Bill%20%5BHL,driving%20vehicles%20in%20Great%20Britain.

[113] https://www.gov.uk/government/consultations/self-driving-vehicles-new-safety-ambition

[114] Carrie Schroll, Splitting the Bill: Creating a National Car Insurance Fund to Pay for Accidents in Autonomous Vehicles, 109 Nw. U. L. Rev. 803 (2015) https://scholarlycommons.law.northwestern.edu/nulr/vol109/iss3/8/

[115] Ibid at 829.

[116] https://www.congress.gov/bill/117th-congress/house-bill/3711#:~:text=The%20bill%20preempts%20states%20from,standards%20identical%20to%20federal%20standards.

[117] See section on Collective Superintelligence above. See https://www.nytimes.com/2024/05/06/technology/wayve-ai-self-driving-vehicles.html

[118] https://www.ft.com/content/423d1bd8-75b7-49a7-8ece-a4b7f0bc6dca and https://www.nytimes.com/2024/07/22/business/softbank-self-driving-cars.html

[119] Wagner, Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems 11 POL’Y & INTERNET 104, 108–09 (2019) available at: https://onlinelibrary.wiley.com/doi/10.1002/poi3.198 at p 110 et seq.

[120] Ibid at 111.

[121] Yang, Yiran and Zuiderveen Borgesius, Frederik and Beckers, Pascal and Brouwer, Evelien, Automated Decision-making and Artificial Intelligence at European Borders and Their Risks for Human Rights (April 10, 2024). Available at SSRN: https://ssrn.com/abstract=4790619 or http://dx.doi.org/10.2139/ssrn.4790619. Kamble in an article also expresses concern about the impact of AI on human rights though not strictly with concern for border control but rather considers more generally the judicial system:  R M Kamble, Artificial intelligence and human rights, Uniform Law Review, 2024;, unae020, https://doi.org/10.1093/ulr/unae020

[122] Ibid at p. 27. AI use at frontiers was also the subject of a European Parliament overview: https://www.europarl.europa.eu/RegData/etudes/IDAN/2021/690706/EPRS_IDA(2021)690706_EN.pdf

[123] Wagner, Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems 11 POL’Y & INTERNET 104, 108–09 (2019) available at: https://onlinelibrary.wiley.com/doi/10.1002/poi3.198 at p 112.

[124] Ibid at 112 to 113.

[125] Ibid at 113.

[126] See Casey et al “Rethinking explainable machines: The GDPR’s “Right to Explanation Debate”” available at https://btlj.org/data/articles2019/34_1/04_Casey_Web.pdf. There are of course rights under GDPR in respect of automated decision making in Art 22. 

[127] Wagner, Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems 11 POL’Y & INTERNET 104, 108–09 (2019) available at: https://onlinelibrary.wiley.com/doi/10.1002/poi3.198 at p 113.

[128] https://curia.europa.eu/juris/document/document.jsf;jsessionid=51EC9748884288D83FB6FC47434551AC?text=&docid=280426&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=2836283

[129] Citing Art. 13: “(2) In addition to the information referred to in paragraph 1, the controller shall, at the time when personal data are obtained, provide the data subject with the following further information necessary to ensure fair and transparent processing: (…) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” (Emphasis added) at p. 10.

[130] See below this chapter

[131] https://www.yalelawjournal.org/forum/the-ethics-and-challenges-of-legal-personhood-for-ai

[132] ibid

[133] Ibid

[134] ibid

[135] https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/

[136] https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf at 2.1 et seq.

[137] Penguin 2022.

[138] https://www.lawreform.ie/_fileupload/consultation%20papers/wpAnimals.htm

[139] Ibid at p.67.

[140] Herbosch, Maarten. “Contracting with Artificial Intelligence: A Comparative Analysis of the Intent to Contract.” Rabels Zeitschrift Für Ausländisches Und Internationales Privatrecht / The Rabel Journal of Comparative and International Private Law, vol. 87, no. 4, 2023, pp. 672–706. JSTOR, https://www.jstor.org/stable/48762171. Accessed 2 June 2024 at p. 706.

[141] Cristina Frattone, Algorithmic mistakes in machine-made contracts: the legal consequences of errors in automated contract formation, Uniform Law Review, 2024;, unae004, https://doi.org/10.1093/ulr/unae004

[142] Citing Walter Auwers, Der Rechtsschutz der automatischen Wage nach gemeinem Recht (Kästner 1891); Fritz Gunther, Das Automatenrecht (1892); Karl Schels, Der strafrechtliche Schutz des Automaten (Roeder 1897); Fritz Schiller, Rechtsverhältnisse des Automaten (Müller & Werder 1898); Paul Ertel, Der Automatenmissbrauch und seine Charakterisierung als Delikt nach dem Reichsstrafgesetzbuche (Pilz 1898); Hartwig Neumond, ‘Der Automat. Ein Beitrag zur Lehre über die Vertragsofferte’ (1899) 89 AcP 166; Mario Ricca-Barberis, ‘Dell’offerta fatta al pubblico e del contratto stipulato coll’automate’ (1901) 41 La Legge 356; Antonio Cicu, ‘Gli automi nel diritto privato’ (1901) 21 Il Filangieri 561; Antonio Scialoja, L’offerta a persona indeterminata ed il contratto concluso mediante automatico (San Lapi 1902); Cristina Frattone, Algorithmic mistakes in machine-made contracts: the legal consequences of errors in automated contract formation, Uniform Law Review, 2024;, unae004, https://doi.org/10.1093/ulr/unae004 at p. 1.

[143] Citing UNGA, ‘Developing New Provisions to Address Legal Issues Related to Automated Contracting’ (12 September 2022) UN Doc A/CN.9/WG.IV/WP.177 [11]–[13]; ‘Provisions of UNCITRAL Texts Applicable to Automated Contracting’ (12 September 2022) UN Doc A/CN.9/WG.IV/WP.176 [39]; UNGA, ‘Report of the United Nations Commission on International Trade Law Fifty-Fifth Session (27 June–15 July 2022)’ UN Doc A/77/17 [159]; ‘Advancing Work on Automated Contracting’ (1 February 2023) UN Doc A/CN.9/WG.IV/WP.179 [50]; UNGA, ‘Draft Provisions on Automated Contracting’ (14 August 2023) UN Doc A/CN.9/WG.IV/WP.182 [34]–[46]; Cristina Frattone, Algorithmic mistakes in machine-made contracts: the legal consequences of errors in automated contract formation, Uniform Law Review, 2024;, unae004, https://doi.org/10.1093/ulr/unae004 at p. 18.

[144] Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 2.

[145] Cristina Frattone, Algorithmic mistakes in machine-made contracts: the legal consequences of errors in automated contract formation, Uniform Law Review, 2024;, unae004, https://doi.org/10.1093/ulr/unae004 at p. 18 – 19.

[146] See Chapter 9

Chapter 6

Superintelligence

“Our civilization looks to be approaching a critical juncture, given the impending development of superintelligence. This means that at some point, somebody, or all of us, might be confronted with choices about what kind of future we want.”[1]

Introduction

This chapter looks at the concept of superintelligence, loosely defined as machine intelligence greater than that of a human being. Superintelligence, as mentioned in our introduction, is the stated aim of many individuals across the AI community. There are some who hold the view that we have already achieved superintelligence, or that we are right on the cusp of achieving it.[2] But the concept is not without controversy. As already mentioned in the introduction, a breakdown in philosophical viewpoint between Boomers, those who want to see AI utilised to the full to advance our progress, and Doomers, those who see advanced forms of AI as potentially leading to catastrophic consequences, has impacted the methodological position taken by some entities in the AI space and has forced governments to weigh in the balance the innovative rewards of allowing AI to thrive against the risk to safety that could ensue. Nor is it an easy exercise. There are those who hold the view that such a balancing exercise is simply not viable: AI, they say, at its most advanced level, could not be contained, and consequently any restrictions we set down for its operation would be inoperable because AI would be too intelligent to be constrained by human-made rules designed to contain it. This sentiment was aired most recently in September 2024, when a group of influential scientists said that A.I. technology could, within a matter of years, overtake the capabilities of its makers.[3]

While the actual methodology required to achieve superintelligence is beyond the scope of this book, it is a process that requires vast amounts of data, computational power, and energy.[4] Already the frontier models available on the market – like GPT-4 (the model that, from March 2023, replaced the model originally underlying ChatGPT and was itself updated to GPT-4o) – are estimated to require some 300 tons of CO2 for training.[5] Most of this is related to the vast amount of data the model must consume. Speaking at Davos, Switzerland, in January 2024, OpenAI co-founder and CEO Sam Altman indicated that an energy breakthrough would be required[6] before we could achieve human-level AI, but that AGI could still be developed in the “reasonably close-ish future”.[7] Nuclear fusion was mentioned as one possible route.[8]

This chapter will begin by looking at some recent developments with regard to certain computer chips – those dedicated specifically to advancing Artificial Intelligence – and at developments in computational power more generally. It will then consider the issue of take-off speed: simply put, the time that humans would get to adjust to the development of a superintelligence, ranging from minutes to centuries. It will look in closer detail at one of those scenarios – a FOOM event – before moving on to consider the types of superintelligence we can expect to encounter in our lifetime: collective, speed, and quality. It will also look at the concepts of friendly AI and alignment. However, before we turn to superintelligence, and the market conditions required to achieve it, we will start with a more straightforward question: what is intelligence? The backdrop to these predictions includes the State of California, which recently proposed a law that would mandate a “kill switch” for high-end AI models:[9] a move which was criticised.[10] Some even went so far as to argue that the Bill risked creating an environment in which companies refused to share their underlying source code owing to the threat of legal action from the state.[11] Following lobbying by tech companies, the legislature watered down the Bill:

“The bill would no longer create a new agency for A.I. safety, instead shifting regulatory duties to the existing California Government Operations Agency. And companies would be liable for violating the law only if their technologies caused real harm or imminent dangers to public safety. Previously, the bill allowed for companies to be punished for failing to adhere to safety regulations even if no harm had yet occurred.”[12]

OpenAI was among the tech companies to voice opposition to the Bill in the wake of the amendments.[13] In spite of the criticism, the Bill passed the legislature, leaving the question of whether it should be enacted in the hands of the governor of the State of California.[14] Opponents cited competition concerns in the lead-up to the decision.[15] Ultimately the governor vetoed the legislation, stating that the Bill was “flawed”.[16]

Intelligence

Max Tegmark recalls a story from a symposium he had the good fortune to attend at the Swedish Nobel Foundation. During his attendance a discussion arose on the definition of intelligence. He recalls that the experts present argued at length without reaching consensus, something he found funny: the intelligent intelligence experts failing to agree on how to define intelligence!

In light of this he proposes that there is no undisputed “correct” definition of intelligence. He lists, however, many competing candidates – capacity for logic, understanding, planning, emotional knowledge, self-awareness, creativity, problem-solving and learning[17] – before settling on something quite simple: the ability to accomplish complex goals.[18]

Consequently, as there are many possible goals, there are many types of intelligence. He gives the apt example of comparing one computer programme that can play chess with another that can play Go. He asks which of the two is the more intelligent, before reasoning that both are intelligent, since each does a different thing well. However, by this reasoning, he states, a third programme that can play both chess and Go would be more intelligent than the first two – if it is at least as good as the other two at accomplishing all goals and better at at least one of the games.
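
Tegmark’s condition can be stated a little more formally. The following gloss is the present author’s, not Tegmark’s own notation: writing G for the set of goals and a_P(g) for the ability of programme P at goal g, the third programme counts as more intelligent than the first two if it is at least as able across every goal and strictly better on at least one:

```latex
% The present author's gloss on Tegmark's chess/Go comparison (not Tegmark's notation).
\forall g \in G:\; a_{P_3}(g) \ge \max\big(a_{P_1}(g),\, a_{P_2}(g)\big)
\qquad\text{and}\qquad
\exists g^{*} \in G:\; a_{P_3}(g^{*}) > \max\big(a_{P_1}(g^{*}),\, a_{P_2}(g^{*})\big)
```

On this reading, intelligence comparisons are only partial: two programmes that excel at different goals, like the chess-only and Go-only programmes, cannot be ranked against each other, which is consistent with Tegmark’s answer that each is intelligent in its own way.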

Ability, he says, comes on a spectrum and is not necessarily an all-or-nothing trait, but it can be described in terms of narrow and broad intelligence. Again, the computer accomplished at playing chess can only play chess – so it is narrow. A little further along the scale, the Optimus robot showcased on stage by Elon Musk in 2022,[19] which could walk without a tether for the first time and wave to the audience, is an example of a less narrow intelligence – though it still falls into the narrow category. Human intelligence, by contrast, falls into the broad intelligence bracket.

Tegmark says that artificial intelligence researchers consider that intelligence is ultimately all about information and computation – and they see no reason why machine intelligence cannot reach the same levels as a human.[20] “The term “artificial intelligence” or “AI” has been defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”[21] So, with that in mind, this chapter will next turn to its title – superintelligence. But, before that, we should consider the market conditions driving industry’s move towards Artificial General Intelligence (AGI).

Nvidia – restrictions and developments

In October 2023 the Government of the United States of America announced restrictions on the export to China of certain types of high-powered artificial intelligence computer chips, including chips from manufacturer Nvidia – its A800 and H800.[22] The announcement was designed to close loopholes, since an earlier move to restrict another chip (the H100) had not been entirely satisfactory, and it was subsequently reported that the latest prohibition too had limited effect,[23] with reports of stockpiling[24] and a thriving underground marketplace of smugglers.[25] A further series of restrictions followed in 2025.[26] Shortly thereafter the market for high-end chips corrected sharply when news of a breakthrough by Chinese startup DeepSeek was released. That company had achieved capability comparable to the top-end models already on the market, but by using mainly less advanced chips and far fewer top-end chips – about 2,000 Nvidia chips against 16,000 for the then existing models.[27]

The H100, the chip of choice of OpenAI, was already at the high end of the market in terms of chip design but, faced with curbs on its importation into China, companies there had instead resorted to the slightly slowed-down versions, the A800 and H800. The second series of restrictions was intended to have a dampening effect on Artificial Intelligence development in China. But what was the reason for putting the block in place?

CNBC said the move was to prevent advances in China – especially with regard to Artificial Intelligence uses with military application:

The goal of the U.S. restrictions is to prevent Chinese access to advanced semiconductors that could fuel breakthroughs in artificial intelligence, especially with military uses, U.S. Commerce Secretary Gina Raimondo said on a call with reporters. They’re not intended to hurt Chinese economic growth, U.S. officials said. “The updates are specifically designed to control access to computing power, which will significantly slow the PRC’s development of next-generation frontier model, and could be leveraged in ways that threaten the U.S. and our allies, especially because they could be used for military uses and modernization,” Raimondo said.[28]

The upshot of the developments was crystal clear: not only did the growth and development of Artificial Intelligence technology in China concern the US Government, it also had potential military application.[29] Putting two and two together, the world was in the midst of a race to attain AGI – much in the same way as the world, after the commencement of hostilities in WWII, was in a race to develop a weapon that would end the war – and the stakes could not be higher.

In March 2024 Nvidia announced the release of its Blackwell platform, heralding a “new era of computing”.[30] Named in honour of David Harold Blackwell, a mathematician who specialized in game theory and statistics and the first Black scholar inducted into the National Academy of Sciences in the USA, the system, said the company, would enable organizations everywhere to build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor.[31] Described as the “world’s most powerful chip”, the GB200 Grace Blackwell Superchip at the heart of the new platform would deliver 1.4 exaflops of AI performance and 30TB of fast memory. The numbers were staggering: could this be the platform to push us closer towards Artificial General Intelligence?

Pioneering author Nick Bostrom published a seminal description of the drive for Artificial Intelligence. In his influential book, Bostrom makes a distinction between intelligence which is equal to that of a human and what he describes as “strong superintelligence”, which, he says, is an intelligence vastly greater than humanity’s combined intellectual wherewithal.[32] It is this intelligence that he describes as superintelligence, and it gives his book its title.

Bostrom points to the biological limitations of the human brain. Biological neurons operate at a peak speed of about 200 Hz, which, he says, is a full seven orders of magnitude slower than a microprocessor.[33] As a consequence the human brain is forced to rely on massive parallelisation and is “incapable of rapidly performing any computation that requires a large number of sequential operations”.[34] There are further limitations: axons carry action potentials at speeds of 120 m/s or less, whereas electronic processing cores can communicate optically at speeds of 300,000,000 m/s. The relative sluggishness of neural signalling also limits how big a brain can be while still functioning as a single processing unit. The human brain has fewer than 100 billion neurons, while, in contrast, computer hardware is “scalable up to very high physical limits”.[35] The human brain can hold only four or five chunks of information at a given time, while the hardware advantages of digital intelligence would give it much larger memories – even if it is difficult to compare the memory of a human brain with the Random Access Memory (RAM) of a digital computer. Finally, a machine may have other advantages over a human brain, including with respect to sensors (data flow could be increased by adding millions of sensors), reliability, and lifespan.
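
The orders-of-magnitude claims are easy to check with rough arithmetic. The 2 GHz clock rate used below is the present author’s choice of comparator; Bostrom’s point holds for any modern processor running in the gigahertz range:

```latex
% Rough arithmetic behind the comparisons (2 GHz chosen purely for illustration).
\frac{2\ \text{GHz}}{200\ \text{Hz}} = \frac{2\times 10^{9}\ \text{Hz}}{2\times 10^{2}\ \text{Hz}} = 10^{7}
\qquad
\frac{3\times 10^{8}\ \text{m/s}}{120\ \text{m/s}} \approx 2.5\times 10^{6}
```

In other words, such a processor’s clock runs some ten million times faster than a biological neuron fires, and a signal inside a machine can travel more than a million times faster than an action potential along an axon.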

At the time of writing (2014) Bostrom considered that the computational power of the human brain still compared favourably with that of a digital computer – but computers are developing fast and few consider it unfeasible that a machine will surpass a human. Digital minds will have other advantages: they will be editable, will permit duplication, will achieve goal coordination, thus increasing efficiency in attaining goals, and will permit memory sharing, simply by swapping files. Ultimately, the attainable advantages of machine intelligence – in other words, the capability of machine intelligence into the future at its current rate of development – were described by the author as “enormous”.[36]

With this in mind, and with the dawn of a new supercomputing platform announced by Nvidia, and with other chip manufacturers, such as AMD and Intel,[37] also in the race, it is clear that humanity is moving closer and closer to achieving the OpenAI goal of creating “autonomous systems that surpass humans in most economically valuable tasks”.[38]

Software

But is there another way? One option considered by Bostrom was whether the same end-result could be achieved, not with general machine intelligence, but with software. In other words, the result might be the same but the method of achieving it would be pre-defined by its designer and would not rely on a superintelligence thinking for itself. This, he neatly describes, is to build AI as a “tool and not an agent”. As a tool, the application of software would not pose challenges to anywhere near the same extent as an autonomous agent capable of having a will of its own – it doesn’t, for instance, pose an existential threat to humanity.[39]

The software hypothesis seems valuable, if it is possible to mathematically code for the specific outcomes its designers intend. But, even if it were possible to programme for each such outcome, the software would still be limited by the intentions of its designers. Sure, software might not always perform in the way it was intended, but there is no comparison between that scenario and the creation of a general intelligence with a will of its own. Further, as Bostrom says, “the range and diversity of tasks that a general intelligence could profitably perform in a modern economy is enormous”.[40] It wouldn’t be sustainable to contemplate designing special-purpose software to perform all of those tasks. Furthermore, imbued with intelligence, the general intelligence could perform tasks not contemplated by its designers, through its own ability to learn, reason, and plan. The software solution, though undeniably safer than AGI, has its limitations – and is unlikely to appeal to Boomers.

Take-Off Speed

Another issue that arises, dealt with by Bostrom, is the speed of take-off from human level intelligence to superintelligence. The author considers three possible take-off speeds: slow, moderate, and fast.

A slow take-off is described[41] as evolving over a longer period of time – decades or centuries. This type of take-off is described as presenting “excellent opportunities for human political processes to adapt and respond”. Among those processes is an AI arms race developing over a long temporal interval, which would give Governments time to negotiate treaties and design enforcement mechanisms. A slow take-off would also give ordinary workers the opportunity to vocalise their concerns, if they feel they are being disadvantaged by the developments, and would give Governments the time to address those concerns.

A moderate take-off is described[42] as one that takes place over an intermediate interval, such as several months, or even years. Such a take-off would give humans some chance to adjust to the development of AGI, but not nearly as much opportunity to allay concerns and build enforcement mechanisms as a slow take-off. There would not be enough time to build new systems such as political systems, surveillance systems, or computer network security protocols, but there would be the opportunity to apply existing systems to the new challenge presented.

A fast take-off is defined by Bostrom as one that can arise in a matter of minutes. Such a scenario would offer humans virtually no time to deliberate on the consequences, and the fallout could be immediate. A fast take-off would be totally reliant on systems already in place. Bostrom even says, apocalyptically, “Nobody need even notice anything unusual before the game is already lost.”[43]

Some experts already view a fast take-off scenario, or, at least, a scenario where humans lose control, as something to be actively concerned about. The Future of Life Institute, a non-profit organisation which advocates the safe use of technology and the preservation of life, has published multiple times expressing its concern at the catastrophic outcomes potentially associated with AGI. One of the founders of the Institute, Max Tegmark, Professor at MIT and author of Life 3.0, describes breaking down in tears after a visit to the London Science Museum. The author was moved by the collection’s testimony to human innovation and disturbed by the “poetically tragic” prospect of technological progress leading to the development of an artificial intelligence that would make humanity obsolete.[44] He referred to it in the conclusion of Life 3.0 when he said:

“What had triggered my London tears was a feeling of inevitability: that a disturbing future may be coming and there was nothing we could do about it. But the next three years dissolved my fatalistic gloom. If even a ragtag bunch of unpaid volunteers could make a positive difference for what’s arguably the most important conversation of our time, then imagine what we can all do if we work together!”[45]

FOOM

A fast take-off speed has been described as a FOOM event.[46] Essentially, it’s a sudden spike in the intelligence of an AI such that it achieves intelligence greater than, potentially far greater than, that of a human. It’s been the subject of debate within AI circles for many years and it’s one of the main concerns of Doomers. A FOOM event could potentially defeat any security protocols we had in place to contain it – as we can expect that a superintelligence would know how to circumvent practically any restriction a human can set down for it. If limited to minutes, as the fast take-off scenario hypothesises, it would allow virtually zero time for humans to adjust. We would be in the situation where overnight a superintelligence had been created – and our world had changed.
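The mechanism behind a FOOM is recursive self-improvement: each improvement makes the system better at improving itself. The toy calculation below is a deliberately simplistic illustration of that compounding effect, not a model drawn from any of the sources cited here; the growth factor is invented.

```python
# Deliberately simplistic toy model of recursive self-improvement (illustrative only;
# the 1.5x gain per cycle is invented, not taken from any source cited here).
capability = 1.0          # arbitrary units; 1.0 stands in for roughly human level
gain_per_cycle = 1.5      # assumed improvement each time the system improves itself

for cycle in range(1, 11):
    capability *= gain_per_cycle
    print(f"cycle {cycle:2d}: capability is about {capability:5.1f}x the starting level")

# After ten cycles capability is ~58x the starting level. If each cycle takes minutes
# rather than years, this compounding is the intuition behind a "fast take-off".
```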

It’s worth pointing out, thankfully, that this event is not a certainty – in fact the probability of it occurring is, overall, quite low: while one commentator put the risk at greater than 10 per cent, another put the equivalent risk at less than 1 per cent.[47] One of the reasons for the difference in figures is the background of the individual researcher. So, while one commentator might have a background in physics, philosophy and computer research, another might reason the issue out from the perspective of abstractions based on economic growth.[48] It all depends on perspective and skill-set, and AI brings together people from various backgrounds – as it’s about the creation of intelligence.

One important commentator in this space, Eliezer Yudkowsky, a committed Doomer, predicts disaster from AI: that humanity faces likely extinction in the near future (years or decades) from a rogue unaligned superintelligent AI system.[49]

He puts the matter thus:

“So if you build a mind smarter than you, and it thinks about how to go FOOM quickly, and it goes FOOM faster than you imagined possible, you really have no right to complain—based on the history of mere human history, you should have expected a significant probability of being surprised.”[50]

Unsurprisingly, FOOM events are the refrain of Doomers – those that predict apocalyptic outcomes in the future. But not everyone agrees with this perspective: at least one reviewer has coined the phrase “AI Fatalism” to describe “the belief, sadly common in tech circles, that A.I. is part of an inevitable future whose course we are powerless to change.”[51] The next section will counteract this one somewhat and will look at the concept of friendly AI, which many see as a more likely outcome.

As mentioned earlier, AI has transformative potential, mostly for good – greater productivity,[52] greater efficiency, greater insight – but, sadly, the potential is also there for destruction, or for losing control. A FOOM event is, in many respects, a worst-case scenario for this reason, as it cedes control to another entity and does so very quickly – potentially in minutes.

Another issue raised is to predict the behaviour of a superintelligence upon its inception. Yudkowsky, quite brilliantly, looks at two obvious options open to it: to share and trade with the world around it, or, to take over the universe. He defines the first as having, say, 10 utilons, and the second as having, comparatively speaking, 1,000 utilons – suggesting that it’s a virtual certainty that the superintelligence would take the second option. This is how he puts it:

“Do you agree that if an unFriendly AI gets nanotech and no one else has nanotech, it will take over the world rather than trade with it? Or is this statement something that is true but forbidden to speak?”[53]

The truth is, faced with this option, there is no indication in advance of how a superintelligence would react – as we have never before encountered a man-made superintelligence. For those with strong belief and faith in God, the outcome of encountering God would be more predictable, as we see Him as benevolent – and have already been bestowed with a guide: the Bible. The same cannot be said of AI. Even if the AI did choose to trade with us it would still be on its terms, as, at that point, we would have lost control of the game.[54]

Another issue, as if we didn’t have enough already, is whether the superintelligence is a singleton – existing on its own as the only superintelligence in our Universe[55] – or whether there are multiple superintelligences, each vying with the other over outcomes. Bostrom coined the phrase and he defines a singleton as “a world order in which there is a single decision-making agency at the highest level.”[56] Suffice it to say, faced with a choice between the two options, the singleton option seems safest, as it’s far easier to deal with one entity than with several: several entities raise the risk that at least one of them would choose the second option above and seek to take over. We have a better probability of co-existing with one entity than of facing what has been described as “burning the cosmic commons” as different entities bid against each other for control.[57] Humans would not fare very well in such a scenario.

Alignment

We move from the domain of Doomers to the goal of Boomers – friendly AI. This captures the belief that, insofar as we achieve AGI, we will be able to corral it and live peaceably alongside it. After the doom and gloom of the previous section it’s time to be more optimistic. On that note, one author refers to the tremendous work that is now being undertaken to address some of the concerns already mentioned in the previous section.

“There is every reason for concern, but our ultimate conclusions need not be grim. (…) the outbreak of concern for both ethical and safety issues in machine learning has created a groundswell of activity. Money is being raised, taboos are being broken, marginal issues are becoming central, institutions are taking root, and, most importantly, a thoughtful engaged community is developing and getting to work. The fire alarms have been pulled, and first responders are on the scene.”[58]  

The concept of friendly AI is also known as AI alignment and it pivots on the view that we can both predict its future behaviour and ensure that it acts in the way predicted. While friendly AI points towards a slow take-off, it isn’t necessarily confined to one. It depends on the measures we put in place before inception and also, potentially, on the methodology utilised to achieve AGI in the first place.

It is undeniable that a friendly AI would yield enormous upside benefit to humanity. We could solve problems we are not currently in a position to solve: interplanetary space travel, the end of all war, repairing the ozone layer, providing cures and treatment for diseases, the discovery of new chemical elements, and many, many more potential upside outcomes. For many Boomers the positive impact AGI would have on our society is so great that we should strive to get there – and this is what drives many of them to push for its creation.

At the heart of this view is the importance of instilling human values in the AGI. It is essential that those in the business of creating a superintelligence ensure, insofar as possible, that an AGI would co-exist with us, would be kind, and would share our values. While not wishing to digress into science fiction, there is an interesting set of rules written by the author Asimov, referred to earlier, which, he states, should apply to a robot:

“A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”[59]

The idea of instilling a set of rules like these, leaving aside the question of how such instillation would be achieved, is laudable and has merit. Christian looks in some detail at how such instillation might take place and refers in his book The Alignment Problem[60] to the study of reinforcement learning, in which correct action results in a reward.[61] At its most basic, reinforcement training is conducted through trial and error,[62] and most of the time the agent under review will take the action that results in the greatest total reward.[63] This occurs some 99% of the time; the other 1% of the time the agent tries something completely different, just to see what might happen.[64]
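The 99%/1% behaviour described above corresponds to what machine-learning practitioners commonly call an “epsilon-greedy” strategy. The short sketch below illustrates the idea on an invented three-action example; the action names and reward values are assumptions made purely for illustration and are not taken from Christian’s book.

```python
import random

# Minimal sketch of the 99%/1% ("epsilon-greedy") behaviour described above.
# The three actions and their hidden rewards are invented purely for illustration.
hidden_rewards = {"action_a": 1.0, "action_b": 5.0, "action_c": 2.0}
estimates = {action: 0.0 for action in hidden_rewards}   # the agent's running estimates
counts = {action: 0 for action in hidden_rewards}
epsilon = 0.01   # 1% of the time, try something different "just to see what might happen"

for step in range(10_000):
    if random.random() < epsilon:
        action = random.choice(list(hidden_rewards))        # explore
    else:
        action = max(estimates, key=estimates.get)          # exploit the best-looking action
    reward = hidden_rewards[action] + random.gauss(0, 0.5)  # noisy reward signal
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running average

print(estimates)   # the estimate for "action_b" should come to dominate
```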

There are other occasions when the reward is simply not foreseeable. Christian refers to the attempts of B.F. Skinner, a psychologist, behaviorist, inventor, and social philosopher, to reward a small bird for bowling a tiny bowling ball down a miniature bowling alley. 

“The bird, clueless about what game it had been put into, might take years to happen upon the right behaviour – of course it (and Skinner) would have died of hunger long before then.”[65]

These behavioural experiments form the backdrop for how Christian sees the issue of alignment playing out. An AI would need to be trained and a reward structure might be the way to train it. Either way our training methodology would require explicit formal metrics. As the author puts it:

“Increasingly institutional decision-making relies on explicit, formal metrics. Increasingly, our interaction with almost any system invokes a formal model of behaviour – either a model of user behaviour in general or one, however simple, tailored to us. What we have seen (…) is the power of these models, the ways they go wrong, and the ways we are trying to align them with our interests.”[66]

The opportunities involved, rather like the plot from the film Arrival, are many, and startling. There are many researchers that would jump at the opportunity to teach a new mind how to think and to learn from it in turn. As Christian says:

“[T]he prospect of so-called AGI – one or more entities as flexibly intelligent as ourselves (and likely more so) – will give us the ultimate look in the mirror. Having learned perhaps all too little from our fellow animals, we will discover first hand which aspects of intelligence appear to be universal and which are simply human. This alone is a terrifying and thrilling prospect. But we are better knowing the truth than imagining it.”[67]

Still, it’s worth considering the views of leadership within the tech community itself. Over 1,000 tech leaders, researchers and others signed an open letter in March 2023[68] urging artificial intelligence labs to pause development of the most advanced systems, warning that such tools present “profound risks to society and humanity.”

The letter warned that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control”.[69]

Signatories included Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.[70]

In an even more incredible development, the leaders of three leading AI companies – OpenAI, Google DeepMind, and Anthropic – signed a different letter a couple of months later[71] which warned that “mitigating the risk of extinction from AI should be a global priority alongside other societal risks, such as pandemics and nuclear war”.[72] The New York Times aptly summarises the proceedings:

“These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building – and, in many cases, are furiously racing to build faster than their competitors – poses grave risks and should be regulated more tightly.”[73]

Certainly, the views of OpenAI are interesting, and it is worth noting its view, expressed elsewhere,[74] that AI will “need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc”. It was reported in June 2024 by the Financial Times that the company had increased the number of staff on its global affairs team from three at the start of 2023 to 35 in 2024 – primarily with the aim of influencing regulation.[75]

Types of Superintelligence

It’s difficult to quantify how significant the book Superintelligence was upon its release. While the idea of creating a machine intelligence had been around for a long time, as early as the 1950s,[76] and had featured in numerous science-fiction plots,[77] it wasn’t until the publication of Nick Bostrom’s seminal text that realistic concepts, and scenarios, were crystallised. One of the matters, among many, considered by that author was the different types of superintelligence[78] we can expect to encounter, possibly in our own lifetime, as research and development work pushes towards the creation of AGI and it becomes more and more likely with each passing day that we will come face-to-face with it.

Collective Superintelligence

This is described as aggregating numerous smaller intelligences. Let’s take an example: imagine that you encountered a superintelligent entity, maybe in person on the street, or in a Government office, or online, and had a conversation with that entity. Imagine if that conversation was replayed for all of the other AGI entities in the world, so that each time you interacted with a different AGI it had perfect recall of every other conversation you had ever had with another AGI entity. Could you imagine how this would shape, or re-shape, your interaction with the world around you? The consequences would be extraordinary.

The idea of collective intelligence is something that has already been contemplated in different spaces. One example is in the field of autonomous, driverless, car technology, where the work being done in the testing phase embraces the idea that as driverless cars advance, and encounter new scenarios, they each pass that information along to every other driverless car connected to them. This creates an extraordinarily efficient system of learning compared to the rote learning required of a human being, which depends on personally encountering different situations and learning from each of them – some research even suggests that autonomous vehicles are, overall, safer than human-driven ones.[79] Still, the difficulties driverless cars, or those driving in autopilot mode, have had in testing on public roads are testament to the current limitations of machine learning – autonomous cars are capable of aberrations: like when one stopped randomly in a tunnel causing a pile-up behind it,[80] or when one drove through a building site,[81] or when one failed to recognise markings on a motorway.[82] The National Highway Traffic Safety Administration (USA) released the results of a two-year investigation that analysed 1,000 Tesla crashes while vehicles had Autopilot engaged and found the system “can give drivers a false sense of security.”[83] Tesla recalled nearly 2 million vehicles as a result.[84] A case was opened against Tesla for wrongful death by the family of an individual killed when Autopilot was deployed.[85]

However, all cases brought against Tesla regarding its Autopilot had been either settled or dismissed – according to The New York Times.[1] This changed in July 2025 when the first jury hearing against Tesla took place in Federal Court in Miami. The case concerned the crash of a Model S vehicle in 2019 in circumstances where the driver lost attention while retrieving a mobile phone and, it was alleged, the vehicle, with Autopilot engaged, crashed into another vehicle, killing one pedestrian and injuring another. The family of the pedestrian killed in the crash, Naibel Benavides, brought the action. Her boyfriend was also named as a party in the proceedings – he survived with grave injuries. In its defence Tesla claimed the driver kept his foot on the accelerator while he attempted to retrieve the phone, and that this, in effect, overrode an element of the Autopilot system (cruise control). It was also claimed that the Autopilot system wasn’t effectively driving the vehicle at the time, as there had not been a handover between the driver and the vehicle, and that the driver should still have been paying attention.

“The plaintiffs are expected to argue, the court documents show, that Autopilot is supposed to ensure that drivers remain attentive but failed to do so in this case. They are likely to also focus on the car’s automatic emergency braking system, which is supposed to activate even if part of Autopilot is overridden.”[2]

In the result the jury found Tesla partially liable and made a total award of damages of $243m.[3] Tesla has already indicated it intends to appeal. 


[1] https://www.nytimes.com/2025/07/14/business/tesla-trial-autopilot.html#:~:text=The%20case%20stems%20from%20a,had%20been%20settled%20or%20dismissed.

[2] https://www.nytimes.com/2025/07/14/business/tesla-trial-autopilot.html#:~:text=The%20case%20stems%20from%20a,had%20been%20settled%20or%20dismissed.

[3] https://www.ft.com/content/79ddb696-b1f6-4647-94b2-35c7a49e0728

In many ways driverless cars are a very useful barometer for grasping the limitations of software systems, and there are those who hold the view that a fully autonomous global fleet of vehicles (i.e. where all cars are autonomous) will never happen.[86] With AGI, however, the sense is that things could well be different. A driverless car may well be entirely within the scope of the abilities of an AGI. And its advanced collective intelligence will ensure that lessons are learned extremely quickly. In any event, driverless taxis are already making advances on city streets, with reports that billions of dollars are planned to be invested in the technology.[87]
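To make the fleet-learning idea above concrete, the sketch below shows one possible shape of a shared pool of driving “experiences” that every vehicle contributes to and draws from. It is a conceptual illustration only; the class and field names are invented and do not describe any manufacturer’s actual system.

```python
# Conceptual sketch only: a shared pool of driving "experiences" that every vehicle
# both contributes to and learns from. No manufacturer's actual system is described.
from dataclasses import dataclass, field

@dataclass
class Experience:
    scenario: str        # e.g. "unmarked roadworks on motorway"
    action: str          # what the vehicle did
    outcome: str         # what happened

@dataclass
class SharedFleetMemory:
    experiences: list[Experience] = field(default_factory=list)

    def report(self, exp: Experience) -> None:
        self.experiences.append(exp)          # one car's lesson...

    def lessons_for(self, scenario: str) -> list[Experience]:
        return [e for e in self.experiences if e.scenario == scenario]  # ...available to all

fleet_memory = SharedFleetMemory()
fleet_memory.report(Experience("unmarked roadworks", "slowed and handed back control", "no incident"))
# A second vehicle encountering the same scenario can query the pooled experience:
print(fleet_memory.lessons_for("unmarked roadworks"))
```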

Speed Superintelligence

This is described as intelligence just like that of a human mind – only faster. Faster is described as “multiple orders of magnitude” faster. Bostrom gives the example of an emulation operating at ten thousand times the speed of a human brain, which could “read a book in a few seconds”[88] and “write a PhD thesis in an afternoon”.[89] Incredibly, with a speedup factor of 1 million, an emulation could accomplish 1,000 years of human intellectual endeavour in a single day.[90]
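The arithmetic behind that last claim can be checked roughly as follows; the eight-hour “working day” is an assumption introduced here for illustration, since the claim is expressed in round numbers.

```python
# Rough check of the speed-up arithmetic above. The 8-hour working day is an
# assumption made for illustration, not a figure taken from the text.
speedup = 1_000_000
real_hours = 8                                    # one working day of real time
subjective_hours = speedup * real_hours           # 8,000,000 hours of subjective time
subjective_years = subjective_hours / (24 * 365)  # about 913 years, i.e. roughly a millennium
print(f"{subjective_years:,.0f} subjective years accomplished in one working day")
```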

Quality Superintelligence

Bostrom also makes reference to quality superintelligence. Human beings are fallible: everyone makes mistakes and sometimes the most intelligent people make the simplest errors of all. We all know this from our day-to-day dealings with other people. Yet imagine a superintelligence that never erred and was virtually infallible. Bostrom recognises that this type of intelligence is “murky”, in that we simply don’t have experience of intellectual quality beyond the higher echelons of human capability. Consequently it’s difficult to say, without seeing an AGI in action, exactly what its limitations – if any – would be.

Conclusion

This chapter has been, at times, heavy, with discussion of issues until recently described as taboo, and with a series of open-ended possibilities about the world that awaits us. If current predictions are true we should expect to see AGI by 2040[91] or so. Some put the date much sooner, before 2030,[92] but it’s likely that we’ll need breakthroughs first in adjacent industries, such as energy, before we will be in a position to achieve AGI. Of course, as already mentioned, there are numerous routes researchers can take to achieve AGI, including emulation and coding. We have looked at the software option and considered how it might be possible, and certainly safer, but that the ultimate outcome would be more limited.

The chapter has also considered the different take-off speeds for the development of AGI – whether slow, moderate, or fast. The difference between these – ranging from centuries to minutes – is quite simply enormous. What is clear is that a realisation has dawned on all across this space that there is the potential for harm. While some will take the view that this should result in a cessation, the markets simply don’t operate like that; nor does the Hiroshima Process contemplate this when it states unequivocally that members should “prioritise the development of advanced AI systems”.[93] Private entities that have enjoyed success from their growth and development are extremely unlikely to pull back. For Governments too it presents a dilemma. Forcing industry, and indeed Government itself, to a cessation behind closed doors might be operable, but it wouldn’t be an international standard. And what of those that enjoy a global hegemony? They are highly unlikely to pull back and watch some other nation take control – insofar as the result can be contained. This appears to be the position of the United States of America when it issued an executive order mandating that it be informed of critical developments in the area.[94] Nobody, at least nobody without nefarious intent, would wish to wake up in the morning to discover that the world has changed. This was the position, at least, until President Trump rescinded President Biden’s Executive Order and placed innovation, instead, at the forefront of the issue.[95]

Contrariwise there are numerous grounds for optimism. AI is potentially life-changing for many and this chapter has looked briefly at some of the innovations we might expect from it in its advanced state. It’s a wish list of items that humans are striving for – and AI may well be the means to get there. Principal among these is the issue of interplanetary travel. Ultimately, we know from physics that the earth will not survive forever. Aside from present day issues of real concern such as climate change we are confronted with the inevitable reality that one day our home here will no longer exist. It isn’t a problem for me and you, it’s a long way in the distance, but it is a problem for mankind. Ultimately, our survival will be dependent on getting off this planet, and AI may well provide the means to achieve this. 

In the meantime it’s worth emphasising there are lots of researchers working in this industry that both want to achieve AGI and want to achieve it safely – or that are actively asking for guardrails to be put in place. The issues, for a while taboo, are now, very much, out in the open. People across the industry are aware of both what’s at stake and what the potential dangers are. In large part this is due to the sterling work of organisations like the Future of Life Institute that have been vocal and ever-present in raising awareness around those issues. 

But, it’s undeniable that more measures are needed. Writing for the Financial Times one writer said this:

“Some of the risks of AI, such as a Terminator-style future in which machines decide humans have had their day, are well-trodden territory in science fiction – which, it should be noted, has had a pretty good record of predicting where science itself will go (…) For now, in lieu of either outlawing AI or having some perfect method of regulation, we might start by forcing companies to reveal what experiments they are doing, what’s worked, what hasn’t and where unintended consequences might be emerging. Transparency is the first step towards ensuring that AI doesn’t get the better of its makers.”[96]

Even those leading the charge in industry have paused and called for regulation. In 2023 leaders from OpenAI, Google DeepMind, Anthropic and other AI labs warned that future systems could be as deadly as pandemics and nuclear weapons.[97] A one-page statement simply read:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[98]

One report concludes that the risks are manageable – provided there is early intervention from law-makers: “the features of catastrophic risks related to AI are varied and complex, but largely manageable—as long as policy-makers pay sufficient attention to all the dimensions of safety as AI systems progress.”[99] Yet there are some who feel, contrariwise, that AI won’t advance much further than its current state. The Financial Times ran a piece in April 2024 that quoted one commentator as saying: “We may be at peak AI.”[100] That article stated AI was “sucking up cash, electricity, water, copyrighted data. It is not sustainable.”[101] One source makes it clear, however, that we can expect AGI to be deployed across a wide range of initiatives, including governance:

“If AGI is better than most humans at all cognitive tasks, it is very likely to be better than humans at the numerous tasks of governing—that is, designing, implementing, and enforcing the rules by which a community or institution operates. This will create a compelling incentive to invest AGI with governing power at all levels of society, from clubs, schools, and workplaces to the administrative agencies that regulate and help steward the economy, labour, the environment, transport, health care, and even provide for public safety, criminal justice, and election administration. If in fact AGI is much better at executing the tasks that we give it than humans (as its would-be creators intend), there will be a strong, perhaps irresistible temptation to have it identify and select which tasks to pursue, then to have it set our priorities, not just make and enforce our rules in particular domains.”[102]

The last word can be left to the European Economic and Social Committee of the EU: 

“Finally, the question arises as to the possibilities and risks associated with the development of superintelligence. According to Stephen Hawking, the development of general AI may spell the end for mankind. Hawking predicts that, at the moment, AI will continue to evolve at a speed people cannot keep pace with. As a result, there are experts who opt for a ‘kill switch’ or reset-button, which we can use to deactivate or reset an out-of-control or superintelligent AI system.”[103]


[1] Bostrom, Deep Utopia (Ideapress, 2024)

[2] https://economictimes.indiatimes.com/tech/tech-bytes/elon-musk-says-ai-will-be-smarter-than-any-human-next-year/articleshow/108463055.cms?from=mdr

[3] https://www.nytimes.com/2024/09/16/business/china-ai-safety.html

[4] https://www.popsci.com/technology/sam-altman-age-of-ai-will-require-an-energy-breakthrough/

[5] https://www.popsci.com/technology/sam-altman-age-of-ai-will-require-an-energy-breakthrough/

[6] https://www.popsci.com/technology/sam-altman-age-of-ai-will-require-an-energy-breakthrough/

[7] https://www.cnbc.com/2024/01/16/openais-sam-altman-agi-coming-but-is-less-impactful-than-we-think.html

[8] https://www.popsci.com/technology/sam-altman-age-of-ai-will-require-an-energy-breakthrough/

[9] https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047

[10] https://www.ft.com/content/0eb0527b-cca5-46ca-91e0-1240ce89ae3c also see https://www.nytimes.com/2024/08/14/technology/ai-california-bill-silicon-valley.html

[11] Ibid. 

[12] https://www.nytimes.com/2024/08/15/technology/california-ai-bill-amended.html

[13] https://www.ft.com/content/bdba5c71-d4fe-4d1f-b4ab-d964963375c6

[14] https://www.ft.com/content/352b9cdb-9ed9-4dc9-94f3-115cab988c21

[15] https://www.ft.com/content/c5e7bf0b-cadc-4673-b2d8-daa9d89f2230

[16] https://www.nytimes.com/2024/09/29/technology/california-ai-bill.html

[17] Tegmark, Life 3.0 (Penguin) 2017 at p. 49. 

[18] Ibid at 50.

[19] https://www.youtube.com/watch?v=g4xerDP75tU

[20] Tegmark, Life 3.0 (Penguin) 2017 at p. 55.

[21] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ section 3(b). See also (2024) 137 Harv L. Rev (5) 1525 to 1532 which refers to Executive Order 14,105 which restricts outbound US investment in semiconductors and microelectronics, quantum information technologies, and AI systems”. (Exec. Order 14,105 § 9(c))

[22] https://www.cnbc.com/2023/10/17/us-bans-export-of-more-ai-chips-including-nvidia-h800-to-china.html

[23] https://www.nytimes.com/2024/08/04/technology/china-ai-microchips-takeaways.html?searchResultPosition=2

[24] https://www.nytimes.com/2025/01/23/technology/deepseek-china-ai-chips.html

[25] https://www.nytimes.com/2024/08/04/technology/china-ai-microchips.html

[26] https://www.nytimes.com/2025/01/13/us/politics/biden-administration-rules-artificial-intelligence.html

[27] https://www.nytimes.com/2025/01/23/technology/deepseek-china-ai-chips.html

[28] https://www.cnbc.com/2023/10/17/us-bans-export-of-more-ai-chips-including-nvidia-h800-to-china.html

[29] One source points to the connection between superchip manufacturing and geopolitics: https://www.ft.com/content/a613d44c-6ea4-4689-a125-c4f1861bc22e

[30] https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing

[31] https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing

[32] Bostrom, Superintelligence, OUP (2014) at pp. 75-76.

[33] Bostrom, Superintelligence, OUP (2014) at p. 72. 

[34] Ibid. 

[35] Ibid.

[36] Bostrom, Superintelligence, OUP (2014) at p. 74.

[37] https://www.reuters.com/technology/intel-reveals-details-new-ai-chip-fight-nvidia-dominance-2024-04-09/

[38] https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

[39] Bostrom, Superintelligence, OUP (2014) at p. 185. 

[40] Bostrom, Superintelligence, OUP (2014) at p. 185.

[41] Bostrom, Superintelligence, OUP (2014) at p. 77.

[42] Bostrom, Superintelligence, OUP (2014) at p. 78.

[43] Bostrom, Superintelligence, OUP (2014) at p. 77.

[44] https://www.ft.com/content/31176c28-8bea-11e7-9084-d0c17942ba93

[45] Tegmark, Life 3.0, Penguin (2018) at p. 333.

[46] I. J. Good coined the phrase “intelligence explosion” which was based on the positive feedback of a smart mind making itself even smarter. Writing in 1965 he stated: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind… Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.” Good, I. J. (1965), Speculations Concerning the First Ultraintelligent Machine. https://exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869

[47] https://intelligence.org/files/AIFoomDebate.pdf.

[48] https://intelligence.org/files/AIFoomDebate.pdf at pg 26.

[49] https://www.lesswrong.com/posts/Lwy7XKsDEEkjskZ77/contra-yudkowsky-on-ai-doom

[50] Eliezer Yudkowsky, https://intelligence.org/files/AIFoomDebate.pdf

[51] See https://www.nytimes.com/2021/11/21/books/review/the-age-of-ai-henry-kissinger-eric-schmidt-daniel-huttenlocher.html

[52] Although a recent paper casts doubt on the productivity gains we can expect from AI https://www.nytimes.com/2024/07/13/business/dealbook/ai-productivity.html

[53] https://intelligence.org/files/AIFoomDebate.pdf at p. 201. The answer to this question was framed by Robin Hanson as “I am not suggesting forbidding speaking of anything, and if “unfriendly AI” is defined as an AI who sees itself in a total war, then sure, it would take a total war strategy of fighting not trading.” https://intelligence.org/files/AIFoomDebate.pdf at p. 202.

[54] Bostrom, Superintelligence, OUP (2014) at p. 77.

[55] Aside from deity.

[56] https://nickbostrom.com/fut/singleton

[57] https://intelligence.org/files/AIFoomDebate.pdf

[58] Christian, The Alignment Problem, Atlantic (2020) at p. 327.

[59] https://www.britannica.com/topic/Three-Laws-of-Robotics

[60] Christian, The Alignment Problem, Atlantic (2020)

[61] Ibid at p. 153. 

[62] Ibid at p. 156. 

[63] Ibid. 

[64] Ibid.

[65] Christian, The Alignment Problem, Atlantic (2020) at p. 157.

[66] Christian, The Alignment Problem, Atlantic (2020) at p. 327.

[67] Christian, The Alignment Problem, Atlantic (2020) at p. 328.

[68] https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html

[69] https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html

[70] https://thebulletin.org/doomsday-clock/

[71] https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

[72] One article published in The Irish Times highlights the use of current AI by the Israeli army to generate targets among that Gazan population. https://www.irishtimes.com/opinion/2024/04/13/mark-oconnell-the-machine-does-it-coldly-artificial-intelligence-is-already-killing-people/

[73] https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

[74] https://openai.com/blog/governance-of-superintelligence

[75] https://www.ft.com/content/2bee634c-b8c4-459e-b80c-07a4e552322c

[76] https://www.britannica.com/technology/artificial-intelligence/Alan-Turing-and-the-beginning-of-AI

[77] Asimov, I, Robot is a good general example: https://www.britannica.com/topic/I-Robot

[78] Bostrom, Superintelligence, OUP (2014) chap. 3.

[79] https://www.theverge.com/2023/12/20/24006712/waymo-driverless-million-mile-safety-compare-human#

[80] https://www.businessinsider.com/tesla-stops-tunnel-pileup-accidents-driver-says-fsd-enabled-video-2023-1#:~:text=Surveillance%20video%20shows%20a%20Tesla,carmaker’s%20Full%20Self%2DDriving%20software.

[81] https://www.nytimes.com/2023/08/17/us/driverless-car-accident-sf.html

[82] https://www.wired.com/story/tesla-autopilot-self-driving-crash-california/

[83] https://edition.cnn.com/2024/04/08/tech/tesla-trial-wrongful-death-walter-huang/index.html?Date=20240408&Profile=CNN&utm_content=1712604604&utm_medium=social&utm_source=linkedin

[84] https://edition.cnn.com/2023/12/13/tech/tesla-recall-autopilot/index.html

[85] https://edition.cnn.com/2024/04/08/tech/tesla-trial-wrongful-death-walter-huang/index.html?Date=20240408&Profile=CNN&utm_content=1712604604&utm_medium=social&utm_source=linkedin

[86] https://www.theguardian.com/commentisfree/2023/dec/06/driverless-cars-future-vehicles-public-transport

[87] https://www.nytimes.com/2024/09/04/technology/waymo-expansion-alphabet.html

[88] Bostrom, Superintelligence, OUP (2014) at p. 64.

[89] Ibid.

[90] Ibid.

[91] https://intelligence.org/2013/05/15/when-will-ai-be-created/

[92] https://www.analyticsvidhya.com/blog/2024/03/agi-will-be-a-reality-in-five-years-says-nvidia-ceo-jensen-huang/

[93] https://www.mofa.go.jp/files/100573471.pdf

[94] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

[95] https://www.nytimes.com/2025/01/25/us/politics/trump-immigration-climate-dei-policies.html

[96] https://www.ft.com/content/3e27cfd6-e287-4b6f-a588-29b5b962a534

[97] https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html

[98] https://www.safe.ai/work/statement-on-ai-risk

[99] Drexel, Bill, and Caleb Withers. “Conclusion.” Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security. Center for a New American Security, 2024. http://www.jstor.org/stable/resrep60693.11 at p 27.

[100] Financial Times, AI keeps going wrong. What if it can’t be fixed? April 6th 2024. 

[101] Ibid.

[102] Lazar, Seth, and Alex Pascal. AGI and Democracy. Ash Institute for Democratic Governance and Innovation, 2023. JSTOR, http://www.jstor.org/stable/resrep59651. Accessed 2 June 2024 at p. 3.

[103] Opinion, European Economic and Social Committee, Opinion of the Committee on ‘Artificial Intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society, 2017/C288/01 available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52016IE5369&from=CS

Chapter 8

The United States of America Position on Artificial Intelligence

Introduction

This chapter will consider the moves to regulate Artificial Intelligence in the United States of America. A world leader in Artificial Intelligence innovation, the United States has been cautious, seeking to balance the rights of workers and consumers, and privacy, with the desire to continue to encourage innovation in this field. The chapter will first consider the Blueprint for an AI Bill of Rights, a progressive, forward-looking document dated 2022. It will then consider one of two critical Executive Orders issued in this space in 2023: the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It will also look at some of the earliest State adoptions of regulation in this field, including those in Colorado, Connecticut, Virginia, New York and California, and will mention the Bill in the Senate on algorithmic accountability, as well as the proposed introduction of a “kill switch” in California to shut down unsafe AI if required.

Overview

The United States of America, a world leader in Artificial Intelligence – with more AI start-ups raising first-time capital in the United States in 2022[1] than in the next seven countries combined – moved first into the regulatory space in 2020 with its executive order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,[2] which stated, inter alia, that: “Agencies are encouraged to continue to use AI, when appropriate, to benefit the American people. The ongoing adoption and acceptance of AI will depend significantly on public trust.”[3]

In 2022 it issued its Blueprint for an AI Bill of Rights,[4] which was based on an initiative from 2021. That same year it also published its core principles to reform big tech platforms.[5] In 2023 two significant Executive Orders were made: the Executive Order directing agencies to combat algorithmic discrimination as part of the Order to Strengthen Racial Equity and Support for Underserved Communities Across the Federal Government[6] and the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[7] Several states in the United States of America have also embarked on a process of State regulation in this space, including Colorado, Connecticut, Virginia, New York and California. Colorado,[8] Connecticut[9] and Virginia[10] have each enacted privacy legislation governing automated decision making. Using similar language, each gives consumers the right to opt out of automated decisions in respect of financial or lending services, housing, insurance, education enrolment, criminal justice, employment opportunities, health care services, or access to essential goods and services. In May 2024 Colorado became the first US State to pass a dedicated, standalone piece of legislation to govern high-risk AI systems, though the Bill has yet to be signed by the Governor and consequently has not yet entered into force.[11] The Bill, among other matters, addresses the issue of algorithmic discrimination. The Bill does not contain a private right of action but instead can only be enforced by the Attorney General’s office. If the Bill is signed by the Governor and becomes law, it will not go into effect until February 1, 2026.[12] Utah likewise passed a Bill[13] on Artificial Intelligence which imposes limited obligations on private sector companies deploying generative Artificial Intelligence.

In California the California Privacy Rights Act[14] requires the adoption of regulations with respect to businesses’ use of “automated decision making technology”. Furthermore, in that State, there has for several years been a prohibition on certain chatbots.[15] The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act[16] was passed in the California Senate in 2024.[17] Among other matters the Bill proposes that developers guarantee to a newly-created state body that they will not develop models with “a hazardous capability”,[18] such as creating biological or nuclear weapons or aiding cyber security attacks.[19] The Bill opened up again the chasm between Doomers and Boomers, with a group of researchers at OpenAI blowing the whistle in the weeks following the passing of the Bill, citing a “reckless race for dominance” at that company[20] – this in the face of criticism of the Bill by others in the industry.[21] The Senate Bill, among other matters, proposes obligatory reporting on safety testing and the introduction by developers of a so-called “kill switch” to shut down AI models if required.[22] The Californian Bill was co-sponsored by the Center for AI Safety (CAIS).[23]
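The “kill switch” referred to above is, at the level of software, often imagined as nothing more elaborate than a flag that an authorised operator can set and that the model-serving code checks before doing any work. The sketch below is purely illustrative: the Bill does not prescribe any particular implementation, and every name in the example is invented.

```python
import threading

# Purely illustrative sketch of a software-level "kill switch" (the Bill does not
# prescribe any particular implementation; all names here are invented).
class KillSwitch:
    def __init__(self):
        self._halted = threading.Event()

    def activate(self) -> None:      # called by an authorised operator
        self._halted.set()

    def is_active(self) -> bool:
        return self._halted.is_set()

def serve_request(prompt: str, switch: KillSwitch) -> str:
    if switch.is_active():
        raise RuntimeError("Model shut down by operator: refusing to serve requests")
    return f"(model output for: {prompt})"   # stand-in for a real model call

switch = KillSwitch()
print(serve_request("hello", switch))
switch.activate()
# Any further call to serve_request now raises, halting the model's outputs.
```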

In New York, a New York City law will prohibit employers and employment agencies from using automated decision making to screen city residents for employment decisions in certain instances.[24] The Illinois Artificial Intelligence Video Interview Act[25] has required employers to notify applicants when they use AI to assess video job interviews.

Blueprint for an AI Bill of Rights[26]

Arising from an initiative the White House Office of Science and Technology Policy (OSTP) launched in the Autumn of 2021,[27] to develop “a bill of rights for an AI-powered world”, the Blueprint for an AI Bill of Rights (2022) is motivated by concerns about potential harms from automated decision-making. It specifically refers in its opening to “the great challenges posed to democracy today”, including those presented by technology, data and automated systems. It refers to systems used in patient care which have “proven unsafe, ineffective, or biased” as well as algorithms used in hiring and credit decisions which have been found to “reflect and reproduce existing unwanted inequities or embed harmful bias and discrimination”. On the other side, the Blueprint says that “automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients.” 

The important progress associated with the development of automated systems “must not come at the price of civil rights or democratic values.” The Blueprint outlines its core principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. 

Importantly the blueprint does not set out substantive rights. It states:

“The Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a waiver of sovereign immunity.”[28]

On safe and effective systems, the Blueprint states that automated systems should be developed “with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.” This principle is considered important as “reliance on technology can also lead to its use in situations where it has not yet been proven to work – either at all or within an acceptable range of error”. 

Algorithmic discrimination protections should guard against discrimination when automated systems “contribute to unjustified different treatment or impacts disfavouring people based on their race, colour, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law”. The Blueprint says there is “extensive evidence showing that automated systems can produce inequitable outcomes and amplify existing inequity.” Data that fails to take account of prevailing existing biases can result in a range of consequences. The Blueprint mentions facial recognition technology that can contribute to wrongful and discriminatory arrests, hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount the severity of certain diseases in Black Americans.

Data privacy extends to protection from violations of privacy through design choices that ensure protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Data privacy is described as a “foundational and cross-cutting principle required for achieving all others in this framework.” The Blueprint refers to surveillance and to “data collection, sharing, use, and reuse”, and says such practices “now sit at the foundation of business models across many industries”. 

Under the principle of notice and explanation, designers, developers, and deployers of automated systems should provide generally accessible plain-language documentation, including a clear description of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organisation responsible for the system, and explanations of outcomes that are clear, timely, and accessible.

On the principle of human alternatives, consideration, and fallback, the Blueprint advocates that a person should be able to opt out from automated systems in favour of a human alternative, where appropriate. Appropriateness is determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts.

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence[29] (since rescinded[30])

On October 30, 2023 the White House issued details of an executive order signed by President Biden concerning the safe, secure, and trustworthy development of Artificial Intelligence technology. The Order was signed in the same week that a concerned UK Government organised an international summit on Artificial Intelligence safety[31] at Bletchley Park[32] – home, during WWII, of Alan Turing, the author of the Turing Test in 1950.[33]

Describing its signing as a “landmark”, the Executive Order has the stated aim of ensuring that the United States of America leads the way in “seizing the promise and managing the risk of Artificial Intelligence”. The Executive Order establishes standards for AI safety and security while protecting privacy, advancing equity and civil rights, standing up for consumers and workers, and promoting innovation and competition. It also seeks to advance American leadership around the world. Innovation is given particular focus in the Executive Order, which aligns with that administration’s comprehensive strategy for responsible innovation. 

As regards standards for AI safety and security, the Executive Order requires that developers of the most powerful AI systems share their safety test results and other critical information with the US government, citing, also, the Defense Production Act. The Order states that:

“[C]ompanies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.” 

The Order also designates a standards body for Artificial Intelligence systems – the National Institute of Standards and Technology – which will set “rigorous standards for extensive red-team testing to ensure safety before public release”. The Department of Homeland Security and the Department of Energy will also have a role in applying those standards to critical infrastructure sectors, and will address AI systems’ threats to critical infrastructure as well as chemical, biological, radiological, nuclear, and cybersecurity risks. An AI Safety and Security Board will be set up. These actions are considered “the most significant actions ever taken by any government to advance the field of AI safety”.

The Order also refers to protecting against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Standards in this space will be established by “agencies that fund life-science projects” as a condition of federal funding, which, it is set down, will create “powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.”

The Order also seeks to protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. Guidance for content authentication and watermarking[34] will be set down by the Department of Commerce and AI content will be clearly labelled as AI-generated content. This will be particularly the case for government communications and federal agencies will “make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world”.

The Order also seeks to establish “an advanced cybersecurity program” to develop AI tools to find and fix vulnerabilities in critical software. AI is described as producing “potentially game-changing cyber capabilities to make software and networks more secure.”

A National Security Memorandum is also ordered, which directs further actions on AI and security, to be developed by the National Security Council and the White House Chief of Staff. This aspect of the Order addresses the use of AI by the United States military and intelligence community, ensuring that such entities “use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.”

The Order also looks at the societal areas it seeks to protect: Privacy, Equity[35] and Civil Rights, Consumers, Patients, and Students, and Workers. With regard to privacy the Order is specific in its concerns stating that “AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.” With this vulnerability in mind the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids. The following actions were also identified: 

Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques—including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.

Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.

Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.

Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data.
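The privacy-preserving techniques referred to in the first of these actions include, on one common reading, differential privacy: answering queries over personal data with calibrated random noise so that no individual’s presence can be inferred. The Order does not prescribe any particular technique; the following is a minimal sketch of the Laplace mechanism applied to a simple counting query, with arbitrary parameter values chosen for illustration.

```python
# Illustrative sketch of one privacy-preserving technique (the Laplace
# mechanism of differential privacy); the Order itself does not prescribe any
# particular method, and the parameter values here are arbitrary.
import random

def noisy_count(records: list[bool], epsilon: float = 1.0) -> float:
    """Return a differentially private count of records satisfying a predicate.

    A counting query has sensitivity 1 (one person changes the count by at
    most 1), so Laplace noise with scale 1/epsilon gives epsilon-DP.
    """
    true_count = sum(records)
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) noise as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Example: how many patients in a (fictional) dataset have a given condition?
patients = [True, False, True, True, False, False, True]
print(noisy_count(patients, epsilon=0.5))  # noisy answer protects individuals
```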

In respect of advancing Equity and Civil Rights the Order considered that AI, if used irresponsibly, can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. On this issue the Order directs the following:

Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.

Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.

Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

On the issue of consumers, the Order refers to the “real benefits” AI can bring by, for example, making products better, cheaper, and more widely available. At the same time, AI “raises the risk of injuring, misleading, or otherwise harming Americans”. The Order directs as follows:

Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of—and act to remedy – harms or unsafe healthcare practices involving AI. 

Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.

With regard to supporting workers the Order recognises that “AI is changing America’s jobs and workplaces” in that it offers both the “promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement”. To mitigate these risks, the Order seeks to support workers’ ability to bargain collectively, and invest in workforce training and development. The Order directs: 

Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.

Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.

Interestingly, the Order specifically addresses innovation and competition, at the intersection of promoting public safety while not discouraging, or slowing down, innovation. The Order states that “America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined.” The Order ensures that “we continue to lead the way in innovation and competition”. The following actions are set down:

Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change.

Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.

Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.

Like other jurisdictions advancing regulation in this area, the government of the United States of America seeks to set down global standards. It refers to the challenges and opportunities of AI as being global and it seeks to “continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide.” The following actions were adumbrated:

Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.

Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.

Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure.

Finally the Order considers the responsible and effective Government use of AI. It considers that AI can “help government deliver better results”  by expanding agencies’ capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. It also points to the risks of AI – such as discrimination and unsafe decisions. On this area the Order directs the following actions: 

Issue guidance for agencies’ use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment. 

Help agencies acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting.

Accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields.

The salient provisions of the Executive Order will now be considered.

Section 1 sets out the purpose of the Order and refers to the extraordinary potential for both promise and peril. 

“Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.  At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.  Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.  This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”

Section 2 sets down guiding principles to advance and govern the development of Artificial Intelligence: safety and security; promotion of responsible innovation, competition, and collaboration; supporting American workers; advancing equity and civil rights; protecting consumers; managing risks in the use of AI by the Federal government; and becoming the international standard-bearer in this space.

Section 3 sets down definitions and includes, for “artificial intelligence” the following:

“The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3):  a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.  Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”

And for generative AI the following:

“The term “generative AI” means the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content.  This can include images, videos, audio, text, and other digital content.” 

For “watermarking” it states as follows:

“The term “watermarking” means the act of embedding information, which is typically difficult to remove, into outputs created by AI — including into outputs such as photos, videos, audio clips, or text — for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.”
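To make the definition concrete, the toy sketch below embeds provenance information into a piece of generated text using zero-width characters and then recovers it. This is purely illustrative and trivially easy to strip; watermarking schemes actually deployed by providers are statistical and considerably more robust, and nothing here reflects any particular provider’s method.

```python
# Toy illustration of the idea of watermarking defined above: embedding
# provenance bits into text output using zero-width characters. Real schemes
# are statistical and far harder to remove; this merely makes the definition
# concrete.
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def embed_watermark(text: str, provenance: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in provenance.encode("utf-8"))
    payload = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + payload  # invisible when rendered

def extract_watermark(text: str) -> str:
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="ignore")

marked = embed_watermark("This paragraph was machine-generated.", "model:demo-1;date:2024-01-01")
print(extract_watermark(marked))  # -> model:demo-1;date:2024-01-01
```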

Section 4 seeks to set down guidelines, standards and best practices for AI Safety and Security and sets down a timetable for various actions upon signing of the Executive Order. Section 5 looks to promote innovation and competition and sets out various markers including attracting talent to the United States, streamlining the processing times of visa petitions and applications, and other like measures, as well as a clear and comprehensive guide for AI experts and experts in other critical and emerging technologies to understand their options for working in the United States. 

Section 6 looks to support workers including by advancing the Government’s own understanding of the AI implications for workers, and seeks to “foster a diverse AI-ready workforce”.

Section 7 concerns the goal of advancing equity and civil rights including by strengthening AI and Civil Rights in the Criminal Justice System. Section 8 protects consumers, patients, passengers and students. Section 9 deals with the protection of privacy in particular in respect of mitigating privacy risks potentially exacerbated by AI including “AI’s facilitation of the collection or use of information about individuals”. Section 10 refers to the Federal Government use of AI. Section 11 seeks to strengthen American leadership abroad to “unlock AI’s potential and meet its challenges”. 


Algorithmic Accountability Act 2022

The Bill was initiated in the Senate in 2022 and requires certain businesses that use automated decision systems to make critical decisions to study and report on the impact of those systems on consumers. The Bill has been read twice and was referred to the Committee on Commerce, Science, and Transportation.[36]

Among its provisions it states at Section 3:

“The term “critical decision” means a decision or judgment that has any legal, material, or similarly significant effect on a consumer’s life relating to access to or the cost, terms, or availability of—

(A) education and vocational training, including assessment, accreditation, or certification;

(B) employment, workers management, or self-employment;

(C) essential utilities, such as electricity, heat, water, internet or telecommunications access, or transportation;

(D) family planning, including adoption services or reproductive services;

(E) financial services, including any financial service provided by a mortgage company, mortgage broker, or creditor;

(F) healthcare, including mental healthcare, dental, or vision;

(G) housing or lodging, including any rental or short-term housing or lodging;

(H) legal services, including private arbitration or mediation; or

(I) any other service, program, or opportunity decisions about which have a comparably legal, material, or similarly significant effect on a consumer’s life as determined by the Commission through rulemaking.”
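Purely as an illustration of how a business might operationalise this definition in a compliance workflow, the sketch below maps the quoted categories to an enumeration and flags when an automated decision would fall within the Bill’s notion of a “critical decision”. The enumeration mirrors the quoted text; the helper function and its parameters are hypothetical.

```python
# Illustrative only: one way a compliance team might encode the Bill's
# categories of "critical decision". The enumeration mirrors the quoted text
# of Section 3; the helper function and its parameters are hypothetical.
from enum import Enum, auto

class CriticalDecisionArea(Enum):
    EDUCATION_AND_VOCATIONAL_TRAINING = auto()
    EMPLOYMENT_OR_SELF_EMPLOYMENT = auto()
    ESSENTIAL_UTILITIES = auto()
    FAMILY_PLANNING = auto()
    FINANCIAL_SERVICES = auto()
    HEALTHCARE = auto()
    HOUSING_OR_LODGING = auto()
    LEGAL_SERVICES = auto()
    OTHER_COMPARABLE_EFFECT = auto()  # subject to Commission rulemaking

def triggers_reporting_obligation(uses_automated_decision_system: bool,
                                  area: CriticalDecisionArea | None) -> bool:
    """A covered business using an automated decision system to make a
    decision in any listed area would fall within the Bill's study and
    reporting obligations (illustrative logic only)."""
    return uses_automated_decision_system and area is not None

print(triggers_reporting_obligation(True, CriticalDecisionArea.HEALTHCARE))  # True
print(triggers_reporting_obligation(True, None))                             # False
```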

Conclusion

Unsurprisingly, the United States of America has been at the forefront of Artificial Intelligence regulation. Both its Blueprint and the initial Executive Order on the responsible use of Artificial Intelligence set out a clear direction of travel and mandate the United States government, and the legislature, to take particular actions. President Trump, in rescinding President Biden’s original Order, is even more focused on ensuring the path is cleared of all obstacles to innovation. The United States as global hegemon has the most to lose from Artificial Intelligence development that occurs in other jurisdictions and usurps the potency of the technology available to it, although the issue of hegemony is now very much up in the air considering recent geopolitical upheavals. Still, as things stand, and for this reason, the United States, while initially cautious, has moved from embracing innovation through specific measures designed to encourage and incentivise experts from outside the United States to work within its borders to an all-out push to innovate faster than anywhere else.[37]



The commitments have received criticism, however, with commentators describing them as vague, “sensible sounding pledges with lots of wiggle room”.[38] They have been described as pledges that “don’t actually require meaningful action from the companies”[39] and as not backed by the force of law, with no accompanying enforcement mechanism.[40] “The lack of accountability metrics also effectively takes the pressure off companies to solve difficult technical challenges, like detecting AI-generated outputs after they’re released to the public.” Overall, it was the view of one source that the commitments “ultimately lack teeth”.[41] Another source considered that the Executive Order “lays a light hand on the Department of Defence”.[42] One author also says there is too much focus on the negative impacts of Artificial Intelligence and that laws should be better drafted to accommodate the multiple positive use cases for the technology.[43] She states: “Our current tech policy is thin and flat. It conceals that, while such normative tensions have always been a part of democratic regime, we can steer technology’s course to mitigate such conflicts between normative values. Digital technology is already gaining comparative advantage over humans in detecting discrimination making more consistent, accurate, and non-discriminatory decisions; and addressing the world’s thorniest problems: climate, poverty, injustice, literacy, accessibility, speech, health, and safety. The role of public policy should be to oversee these advancements, verify capabilities, and build public trust of the most promising technologies.”[44]


[1] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

[2] https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government

[3] https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government

[4] https://www.whitehouse.gov/ostp/ai-bill-of-rights/

[5] https://www.whitehouse.gov/briefing-room/statements-releases/2022/09/16/fact-sheet-white-house-releases-first-ever-comprehensive-framework-for-responsible-development-of-digital-assets/

[6] https://www.whitehouse.gov/briefing-room/statements-releases/2023/02/16/fact-sheet-president-biden-signs-executive-order-to-strengthen-racial-equity-and-support-for-underserved-communities-across-the-federal-government/

[7] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[8] https://coag.gov/press-releases/3-15-23/

[9] https://www.cga.ct.gov/2023/ba/pdf/2023SB-01103-R000228-BA.pdf

[10] https://law.lis.virginia.gov/vacodefull/title59.1/chapter53/

[11] https://www.bytebacklaw.com/2024/05/colorado-legislature-passes-first-in-nation-artificial-intelligence-bill/

[12] Ibid.

[13] https://le.utah.gov/~2024/bills/static/SB0149.html

[14]https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=1798.185&lawCode=CIV#:~:text=(I)%20Global%20opt%20out%20from,of%20My%20Sensitive%20Personal%20Information.”

[15]https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=BPC&division=7.&title=&part=3.&chapter=6.&article=

[16] https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047

[17] 2024 SB-1047

[18] Sec 3, 22603, B(4)(a)(1) “(i) The developer will not produce a covered model with a hazardous capability or enable the production of a derivative model with a hazardous capability;” Sec 3, 22602, (n)(1) ““Hazardous capability” means the capability of a covered model to be used to enable any of the following harms in a way that would be significantly more difficult to cause without access to a covered model that does not qualify for a limited duty exemption:

(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.

(B) At least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.

(C) At least five hundred million dollars ($500,000,000) of damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human.””

[19] https://www.ft.com/content/eee08381-962f-4bdf-b000-eeff42234ee0

[20] https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html?searchResultPosition=5

[21] https://www.ft.com/content/eee08381-962f-4bdf-b000-eeff42234ee0

[22] “Implement the capability to promptly enact a full shutdown of the covered model.” Sec 3, 22603, (4)(b)(2)

[23] https://www.ft.com/content/eee08381-962f-4bdf-b000-eeff42234ee0

[24] Colorado, Connecticut, Virginia, New York and California

[25] https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015&ChapterID=68

[26] https://www.whitehouse.gov/ostp/ai-bill-of-rights/

[27] https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/?utm_source=onsite-share&utm_medium=email&utm_campaign=onsite-share&utm_brand=wired

[28] https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

[29] https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[30] President Trump rescinded this Order in January 2025.

[31] https://www.aisafetysummit.gov.uk

[32] See chapter 8 for comment.

[33] See chapter 2

[34] See “Copyright Law – Technology – Tech Companies Agree to Develop Mechanisms for Identifying AI-Generated Works – Voluntary Commitments from Leading Artificial Intelligence Companies on July 21, 2023.” Harvard Law Review, vol. 137, no. 4, February 2024, pp. 1282-[i]. HeinOnline, https://heinonline-org.ucd.idm.oclc.org/HOL/P?h=hein.journals/hlr137&i=1304 wherein it is posited that watermarking may lead to a default position whereby the tech companies retain ownership of copyrights to AI-generated works. (at 1284)

[35] This is defined in the Blueprint for an AI Bill of Rights as follows: “Equity” means the consistent and systematic fair, just, and impartial treatment of all individuals. Systematic, fair, and just treatment must take into account the status of individuals who belong to underserved communities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality.” https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

[36] https://www.congress.gov/bill/117th-congress/senate-bill/3572#:~:text=This%20bill%20requires%20certain%20businesses,of%20those%20systems%20on%20consumers.

[37] https://www.nytimes.com/2025/01/25/us/politics/trump-immigration-climate-dei-policies.html

[38] Recent Events 137 Harv. L. Rev 1282 at 1283, citing Kevin Roose, “How Do the White House’s A.I. Commitments Stack Up?”, N.Y. Times (July 22, 2023), https://www.nytimes.com/2023/07/22/technology/ai-regulation-white-house.html [https://perma.cc/UAL6-AWLG].

[39] Recent Events 137 Harv. L. Rev 1282 at 1283.

[40] Ibid.

[41] Recent Events 137 Harv. L. Rev 1282 at 1283.

[42] Demchak, Chris C., and Sam J. Tangredi. “2023 Executive Order on Trustworthy AI Misses Issues of Autonomy and AI Multi-Threat Challenges.” The Cyber Defense Review, vol. 9, no. 1, 2024, pp. 25–34. JSTOR, https://www.jstor.org/stable/48770662. Accessed 2 June 2024 at 25.

[43] Orly Lobel, The Law of AI for Good, 75 Fla. L. Rev. 1073 (2023). See https://www.floridalawreview.com/article/91298-the-law-of-ai-for-good

[44] Orly Lobel, The Law of AI for Good, 75 Fla. L. Rev. 1073 (2023) at 1079. See https://www.floridalawreview.com/article/91298-the-law-of-ai-for-good

Chapter 9

The European Union Artificial Intelligence Act

“The EU’s AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era.”[1] Ursula von der Leyen, President of the European Commission

Introduction

This chapter will look at critical developments in the area of Artificial Intelligence regulation in the European Union. It will focus principally on the EU’s AI Act: the world’s most comprehensive horizontal legal framework on the regulation of Artificial Intelligence systems. It will look in some detail at the provisions, and where relevant will give background on the legislative process. It will focus in particular on the issue of regulation for general purpose artificial intelligence systems. Earlier drafts had omitted these systems as the Act’s provisions had instead focused on the intended purpose of the various systems. Following effective lobbying, such systems were included in the final text and these will be addressed. Liability for Artificial Intelligence systems is also contemplated by the EU and proposals in this respect are dealt with elsewhere in the text.[2]

Ireland’s path with Artificial Intelligence

Ireland ranked in second place within the EU in 2020 in terms of per capita expenditure on investments in AI.[3] The country has already embarked on its path to regulate this space – publishing a national strategy on AI in 2021.[4] It stated:

“The world is changing. We are becoming greener, more sustainable, and more digital. These changes have been accelerated since the onset of the COVID-19 crisis. Overnight, businesses and workers had to adapt to a new reality, embracing new technologies and new ways of working. Digitisation is transforming our lives and our economy, and artificial intelligence (AI) will be at the forefront of this transformation. As we move into the early phases of a recovery, we must look ahead at the opportunities presented by AI and other technologies to build back to a society and economy that is stronger, fairer and more resilient. AI is not a technology of the future, it is a technology of the present. Given its wide application to all sectors, and its high capacity for impact, growth and contribution to improving competitiveness, AI is one of the technologies with the greatest potential for transformation in all areas of productive activity. But not just that, AI also poses significant opportunities in addressing and overcoming pressing societal challenges and creating new value and possibilities for everyone not just the economy.”[5]

The strategy commits Ireland to an AI approach which is responsible, ethical and trustworthy, citing three key areas for harnessing these objectives: a legal framework that will fill any “gaps” in the existing legal framework, pointing also to the adoption of the EU AI Act; ethics, where the objective of supporting the ethical values of society and human rights is key; and standards and certification, which will be used to underpin both legal and ethical obligations. The report also strives to adopt AI in Irish enterprise and to ensure that AI serves the public, including through embedding AI into the provision of certain public services. The strategy also calls for a strong AI innovation ecosystem, stating “if we want AI to thrive, a strong and supportive ecosystem for AI innovation is essential.” It cites Ireland’s strong industry-academic research credentials built around Science Foundation Ireland as well as programmes and academic researchers internationally renowned for their excellence in AI. 

The strategy also points to AI education, skills and talent, stating that “one in three jobs in Ireland” is likely to be disrupted by the adoption of digital technologies and that “our workforce needs to be prepared for the impact of AI”. A supportive and secure infrastructure for AI was also addressed, with high-quality and trustworthy data, robust data governance and privacy frameworks all mentioned. 

The report states:

“Ireland will be an international leader in using AI to the benefit of our population, through a people-centred, ethical approach to AI development, adoption and use.”

Objectives given in the report include:

“1. Strong public trust in AI as a force for societal good in Ireland

2. An agile and appropriate governance and regulatory environment for AI

3. Increased productivity through a step change in AI adoption by Irish enterprise

4. Better public service outcomes through a step change in AI adoption by the Irish public sector

5. A strong Irish ecosystem for high-quality and responsible AI research and innovation

6. A workforce prepared for and adopting AI

7. A data, digital and connectivity infrastructure which provides a secure foundation for AI development and use in Ireland”

Artificial Intelligence is defined as:

“Artificial Intelligence (AI) refers to machine-based systems, with varying levels of autonomy, that can, for a given set of human-defined objectives, make predictions, recommendations or decisions using data.”

It indicates how AI is adopted for societal good and sustainability[6] as well as touching upon the risks of AI[7]. On this point the strategy states:

“There is a risk that AI systems could lead to unfair discrimination and unequal treatment. The risk of discrimination can arise in many ways, for instance biased training data, biased design of algorithms, or biased use of AI systems.” [8]

It also states:

“AI-based systems have the potential to exacerbate existing structural inequities and marginalisation of vulnerable groups. For instance, AI-based facial recognition technology that has been trained disproportionately on lighter skin tones may be significantly less accurate in relation to people of colour and can thus exhibit higher false positive rates for this population.”[9]

The Irish Government announced a refresh of the strategy in 2024[10] to take account of significant developments since the original was published. 

General Purpose AI systems and Large Language Models

While the Irish strategy adequately covers most ground in this area, certainly in terms of the current state of the art, and while the EU approach, at least initially in the drafting of the proposed EU AI Act, broadly coalesced with it, the issue of the future development of Artificial General Intelligence began to raise its head in the literature, becoming especially noticeable around 2022. As we know, from our earlier reading, the subject of Artificial General Intelligence denotes the aim of creating an intelligence which is at least equivalent to that of a human – and possibly more so. This issue, the question of regulating systems that could lead to Artificial General Intelligence, took the form of an excoriation of the earlier draft provisions of the EU AI Act insofar as they omitted so-called general purpose artificial intelligence systems (“GPAIS”) – AI systems which lack a defined specific purpose.

As we know from our earlier reading,[11] AI in its more specific context – AGI – was first mentioned as a term of art in the 1950s,[12] more than anything an aspiration to build artificially a machine which could at least match the intelligence of a human being. We have considered the Turing Test and how the processing power was simply not available to researchers in this field for several decades. With the rise of computerisation in the 1980s, and a continuing stream of commentary in the literature, as well as in various science fiction depictions, the dream of Artificial General Intelligence was kept alive. Various timeframes for its anticipated market deployment have been proposed over the ensuing decades, all of them proving erroneous, and, to this day, there is still no certainty when we can expect its adoption.[13] Some sources believe that with the release to market of ChatGPT in late 2022 we are now within touching distance of achieving AGI. Those opinions may prove erroneous however.[14] The year 2040 has been mentioned by several sources as the year we can expect this type of advancement to be launched into the marketplace – although it may be sooner.[15] Some recent indicators point to the closeness of its adoption: in October 2022, the United States of America implemented an export ban[16] on high-end chips, those used in artificial intelligence systems, to China – in a move many saw as an attempt to slow down that country’s development of AI[17] – although some commentators doubt whether the move will have the desired impact.[18] The USA and China are considered the most advanced in this sphere.[19] And we have already mentioned closer co-operation between the two.[20]

In any event, the issue for the EU was that so-called general purpose artificial intelligence systems, those lacking a specific purpose, were falling outside the scope of the new proposed law. The question arose whether such systems could eventually lead to Artificial General Intelligence, and, if so, whether it would not be in the interests of safety to include provisions in the new law to that effect. While the debate around whether general purpose artificial intelligence systems may eventually lead to AGI is beyond the scope of this book – suffice to say there is a difference of opinion over the issue[21] – it was clear to the EU, in the end, that such systems should be covered by the proposed new law. The drive for their inclusion was almost entirely the work of the European Parliament[22] – led in Ireland by Deirdre Clune MEP.[23]

Let’s remember, from Chapter 2, that the main concern with Artificial General Intelligence is whether this type of intelligence creation will result in what one commentator termed in 2001[24] “Friendly AI” – in other words, whether humans can create an intelligence which is at least equivalent to human intelligence and which complements our interactions. This has become known as the problem of “alignment” and it is a pivotal issue in this space.[25] Simply put, the question for those developing systems of this type is whether or not their creation will result in an intelligence superior, or, stronger still, far superior, to a human being – such that the intelligence created constitutes an existential risk to humans: the subject of the seminal book Superintelligence.[26] We have already looked at various take-off scenarios, including one which would give humans only minutes to react in the event an Artificial General Intelligence suddenly FOOMed.[27] There are also particularised issues which arise, including the avoidance of creating such a ‘FOOM’ event – a sudden spike in the intelligence of the AI that has been created. While, as we’ve discussed,[28] one commentator puts the risk of this occurring at less than 1 per cent, another put the risk far higher – at greater than 10 per cent.[29]

Into this regulatory space enters the Commission of the European Union, which first put forward in 2021 a text for adoption of an Artificial Intelligence Act,[30] described by one organisation as a “ground breaking”[31] piece of legislation and by one commentator as “pioneering” and a “significant legislative milestone”.[32] The Act, which is actually a proposal for a Regulation of the European Parliament and of the Council, named here as the EU AI Act, set about creating a risk classification regime that groups AI systems into three categories: unacceptable, high risk, and limited risk.[33] A reference to minimal risk/low risk which had been carried in an earlier version of the Regulation[34] was not included in the final draft, and the European Commission said these systems enjoy a “free pass”,[35] though the Regulation in Article 69 does refer to the drawing up of codes of conduct “intended to foster the voluntary application to AI systems other than high-risk AI systems”.[36] The Act achieves its risk-centred approach by focusing on the “intended purpose”[37] of the AI system.[38] Intended purpose is defined in Article 3 as follows:

“‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.”
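To see why a classification keyed to a declared intended purpose proved contentious, consider the following sketch. The risk tiers echo the categories discussed in this chapter, but the keyword mapping is an invented assumption rather than the Act’s methodology; the point is simply that a system placed on the market without any declared purpose falls through the classification altogether, which is the gap examined below.

```python
# Purely illustrative: a classifier keyed to the provider's stated "intended
# purpose", echoing the Act's risk tiers. The purpose lists are invented
# assumptions, not the Act's methodology. A system with no declared purpose
# (a general purpose system) simply falls through - the gap discussed below.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency requirements"
    MINIMAL = "no specific obligations"

HIGH_RISK_PURPOSES = {"credit scoring", "recruitment", "border control", "exam scoring"}
PROHIBITED_PURPOSES = {"social scoring", "subliminal manipulation"}

def classify_by_intended_purpose(intended_purpose: str | None) -> RiskTier | None:
    if intended_purpose is None:
        return None  # general purpose system: no intended purpose to assess
    purpose = intended_purpose.lower()
    if purpose in PROHIBITED_PURPOSES:
        return RiskTier.UNACCEPTABLE
    if purpose in HIGH_RISK_PURPOSES:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify_by_intended_purpose("recruitment"))   # RiskTier.HIGH
print(classify_by_intended_purpose(None))            # None: the regulatory gap
```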

Risk Assessment

By defining different risk categories, the AI Act turns importantly on the classification of the given AI system into one or other category, as this determines the obligations which follow: the greater the risk posed by the AI system, the greater the legal safeguards to minimise it. It has been noted in the literature that the Regulation lacks a clear methodology for the assessment of these risks in concrete situations, as risks are broadly categorised based on the application areas of AI systems and risk factors,[39] and that it relies on a “static view of AI risk”.[40] AI, say the authors, is mostly seen as a product, akin to EU product safety legislation.[41] They propose a risk assessment model that identifies and combines specific risk factors influencing real-world AI application scenarios. 

“Accordingly, the risk of an event is assessed by the interplay between (1) determinants of risk (i.e., hazard, exposure, vulnerability, and responses), (2) individual drivers of determinants, and (3) other types of risk (i.e., extrinsic, and ancillary risks).  This framework can provide a more accurate risk magnitude of AIs under a specific scenario. This is a measure defined based on hazard chains, the trade-off among impacted values, the aggregation of vulnerability profiles, and the contextualisation of AI risk with risks from other sectors.”[42]
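The authors present their framework qualitatively. Purely to illustrate how determinants, drivers and ancillary risks might be combined into a single magnitude, the sketch below scores each determinant on a scale of 0 to 1 and aggregates them multiplicatively, with ancillary risk added on top. The aggregation rule is an assumption made for the example and is not the authors’ model.

```python
# A sketch only: the cited authors describe their framework qualitatively.
# Hazard, exposure, vulnerability and response are scored on [0, 1] and
# combined multiplicatively, with an additive term for ancillary risk. The
# aggregation rule is an illustrative assumption, not their model.
from dataclasses import dataclass

@dataclass
class RiskDeterminants:
    hazard: float           # severity of the harm the AI could enable
    exposure: float         # how many people / assets are exposed
    vulnerability: float    # how susceptible those exposed are
    response: float         # effectiveness of mitigation / oversight (0 = none)
    ancillary: float = 0.0  # risk imported from other sectors or systems

def risk_magnitude(d: RiskDeterminants) -> float:
    """Higher hazard, exposure and vulnerability raise risk; an effective
    response lowers it; ancillary risk is added on top (capped at 1)."""
    core = d.hazard * d.exposure * d.vulnerability * (1.0 - d.response)
    return min(1.0, core + d.ancillary)

# Example: a triage chatbot deployed to a large, vulnerable patient population
print(risk_magnitude(RiskDeterminants(hazard=0.8, exposure=0.9,
                                      vulnerability=0.7, response=0.5,
                                      ancillary=0.1)))
```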

The Regulation’s provision on the requirement of a fundamental rights impact assessment should also be noted. Article 27 requires, in certain circumstances, such an assessment of high-risk AI systems to be carried out by the deployer.

Once the assessment has been performed the deployer “shall notify the market surveillance authority of the results of the assessment”.[43] A data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680 may facilitate the fundamental rights assessment to be conducted in conjunction with that assessment.[44]

Luciano Floridi, Matthias Holweg, Mariarosaria Taddeo, Javier Amaya, Jakob Mökander and Yuni Wen[45] have developed capAI, a conformity assessment procedure for AI systems, to provide an independent, comparable, quantifiable, and accountable assessment of AI systems that conforms with the EU AI Act.

High-Risk

One of the categories of high-risk systems identified is those explicitly set out in Annex III of the AI Act as it has been amended.[46] The final amended version of Annex III is set out here:

“1. Biometrics 

(aa) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;[48]

(ab) AI systems intended to be used for emotion recognition.[49]

2. Critical infrastructure: 

(a) AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity; 

3. Education and vocational training

(ba) AI systems intended to be used for the purpose of assessing the appropriate level of education that individual will receive or will be able to access, in the context of/within education and vocational training institution;[50]

(bb) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of/within education and vocational training institutions.[51]

4. Employment, workers management and access to self-employment: 

5. Access to and enjoyment of essential private services and essential public services and benefits: 

(ca) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.[53]

6. Law enforcement, insofar as their use is permitted under relevant Union or national law: 

7. Migration, asylum and border control management, insofar as their use is permitted under relevant Union or national law: 

(da) AI systems intended to be used by or on behalf of competent public authorities, including Union agencies, offices or bodies, in the context of migration, asylum and border control management, for the purpose of detecting, recognising or identifying natural persons with the exception of verification of travel documents.

8. Administration of justice and democratic processes: 

(aa) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view.”[55]

As already stated, these are merely the high-risk systems expressly set out in the proposed text of the EU AI Act, and the list in Annex III of what constitutes a high-risk system is consequently not intended to be exhaustive.[56]

One difficulty with this approach is that the risk classification system, as initially written, was dependent on the “intended purpose of the AI system to be assessed,” a point we have already mentioned.[57] Consequently the question arose: what about AI systems which do not have an intended purpose? Leading the way in its criticism of the original provision was an organisation called the Future of Life Institute, and its President, Professor of Physics at MIT, and author of Life 3.0, Max Tegmark. That organisation put the matter well:

“Among its proposals, [the AI Act] creates a risk classification framework that groups AI systems into four categories: unacceptable, high risk, limited risk, and low risk.[58] Acknowledging the changing nature of technology, authorities incorporated a provision to continuously assess the risk classification of systems. In making these determinations, the EU is instructed to consider “the intended purpose of the AI system.” This provision raises a critical issue, which is that AI systems may escape or evade the Act’s safeguards because there can be a complex mapping between who develops and deploys them, the tasks they perform, and the purpose(s) they serve as a product.”[59]

A debate consequently crystallised in the literature around (i) the meaning of a term subsequently articulated by the EU as General Purpose Artificial Intelligence Systems (“GPAIS”); and (ii) whether those systems, once a standardised definition had been agreed, should fall within the exemptions provided in the AI Act or not. These two issues will each be considered in turn.

On the first, it appears that in the debate which surrounded the absence of a term equivalent to a general purpose AI system from the proposed text of the AI Act, the Council, under the presidency of Slovenia, proposed the following definition:

“AI system… able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation, etc.”[60]

The French EU presidency defined GPAIS as systems that “may be used in a plurality of contexts and be integrated in a plurality of other AI systems”[61] and the Czech EU presidency referred to systems that are intended by the provider to perform generally applicable functions, such as image/speech recognition, and that may be used in a plurality of contexts.[62]

It has been stated by at least one source, considering the above proposals by the Council, that those definitions were inadequate, that the meaning of the term GPAIS in industry varies and that no uniform definition exists:

“The AI Act has generated a need for an actionable definition of GPAIS where none currently exists. Prior to its adoption by the EU, scant literature identifies AI systems as GPAIS. When it does, it describes a range of technologies with vastly different levels of competency.”[63]

That organisation presented the following proposed definition:

“An AI system that can accomplish or be adapted to accomplish a range of distinct tasks, including some for which it was not intentionally and specifically trained.”[64]

In the result the European Council prepared (in November 2022) its compromise text of the AI Act for the Committee of the Permanent Representatives of the Governments of the Member States to the European Union (“Coreper”) which included the following definition for GPAIS:

“‘general purpose AI system’ means an AI system that – irrespective of how the modality in which it is placed on the market or put into service, including as open source software – is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems”[65]

In the final draft of the EU AI Act the following definition was used:

“general purpose AI system” means an AI system which is based on a general purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;[66]

Second, on the question of whether GPAIS, once appropriately defined, should be exempted from the proposed regulations, the Future of Life Institute said such systems should definitely fall within the Act and that the EU should take the opportunity to prevent what it describes as a “regulatory gap” which could surface due to the EU’s emphasis on a system’s “intended purpose”.[67] That organisation stated that a failure to do this “may catalyse important long-term risks that the region and rest of the world should proactively avoid.”[68] An exemption of this kind appears to be something which the European Council was prepared to countenance when, in one of its earlier contributions to this debate, it proposed an exemption[69] for systems of this type. 

The Council stated:

“In particular, it is necessary to clarify that general purpose AI systems – understood as AI system (sic) that are able to perform generally applicable functions (…) – should not be considered as having an intended purpose within the meaning of this Regulation. Therefore the placing on the market, putting into service or use of a general purpose AI system, irrespective of whether it is licensed as open source software or otherwise, should not, as such, trigger any of the requirements or obligations of this Regulation.”[70]

Ten civil society organisations turned to the European Parliament, in October 2022, and asked it to adopt obligations in the AI Act on the providers of GPAIS on the grounds inter alia that “these systems come with great potential for harm”.[71] One such organisation says such harm is already occurring, citing systems propagating extremist content, encouraging self-harm, exhibiting anti-Muslim bias, and inadvertently revealing personal data.[72] There would be, however, political push-back to this position: it was announced in Autumn 2022 that the unofficial USA policy on the European AI Act was to propose “a broader exemption for general purpose AI.”[73]

Finally, it was reported that the Czech Presidency of the EU Council in 2022 proposed that the European Commission should tailor the obligations of the AI regulation to the specificities of general purpose AI at a later stage via an implementing act.[74] By contrast, the US administration warned that placing risk-management obligations on these providers could prove “very burdensome, technically difficult and in some cases impossible” – consequently it appeared it was not in favour of bringing these systems within the rubric of regulation at all.[75]

By November 2022 the text prepared by the European Council for Coreper made reference, in its Title 1A on General Purpose AI systems, to a Commission implementing act following its own investigations in the area, to be conducted no later than 18 months following entry into force of the AI Act.[76] Furthermore, that provision would not apply where a General Purpose AI provider has “explicitly excluded all high-risk uses in the instructions of use or information accompanying the general purpose AI system”.[77] William Fry solicitors, in a note at the time, helpfully explained the meaning:

“This means if a GPAI was intended to be used to create pictures of cute cats but could also be used to programme drones to kill all cats in the world, provided that the GPAI’s instructions of use say “this AI System is not to be used to bring about the extinction of cats”, then [the relevant article] does not apply to that system.”[78] 

So what of the final version? In the result, General Purpose AI systems were included in the final text. This was seen as a huge boon to those who had focused on the overarching requirement of safety of these systems. 

“General purpose AI systems may be used as high-risk AI systems by themselves or be components of other high risk AI system. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain the providers of such systems should, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems and unless provided otherwise under this Regulation, closely cooperate with the providers of the respective high-risk systems to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation.”[79]

High-risk systems need to satisfy the requirements of Chapter III of the EU AI Act, including the establishment of a risk management system; quality criteria applying to testing regimes; technical documentation drawn up before the system is placed on the market; record-keeping; and transparency obligations requiring that high-risk AI systems be designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately. Human oversight is also a feature of high-risk systems, as are responsibilities in respect of the accuracy and robustness of the system and the exigencies of cybersecurity. 

Chapter III of the EU AI Act places obligations on providers of high-risk AI systems, including ensuring that their systems comply with the requirements elsewhere in that Chapter, putting a quality management system in place (Article 17), keeping documentation for a period of 10 years (Article 18) and automatically generating logs (Article 12).[80] There are responsibilities for corrective actions and a duty of information, whereby providers shall inform the distributors of a high-risk system where the system no longer complies with the Regulation. There is also a duty to cooperate with the competent authorities. A fundamental rights impact assessment for high-risk AI systems is also featured (Article 27). 
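As an illustration of the record-keeping obligation, the following sketch wraps each inference of a hypothetical model so that inputs, outputs and a timestamp are automatically written to a log file. It is a sketch of the idea behind automatically generated logs, not a statement of what Article 12 actually requires a log to contain.

```python
# Illustration of the record-keeping idea behind automatically generated logs:
# a thin wrapper that timestamps each inference. A sketch only, not a
# statement of what the Regulation requires a log to contain.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO, format="%(message)s")

def logged_inference(model, features: dict) -> dict:
    """Run the model and automatically record the event."""
    result = model(features)
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": features,
        "output": result,
    }
    logging.info(json.dumps(event))
    return result

# Usage with a stand-in "model":
decision = logged_inference(lambda f: {"eligible": f["score"] > 0.5}, {"score": 0.72})
print(decision)  # {'eligible': True}
```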

A new Chapter V on general purpose AI models is included in the final text. It classifies a general purpose AI model as one with systemic risk if it meets either of the following conditions: it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; or, based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or impact equivalent to those set out in the first condition (Article 51).

Where such a model is found to exist, the relevant provider shall notify the Commission “without delay” and in any event within two weeks after those requirements are met or it becomes known that they will be met. The Commission shall keep a list of such models and shall publish it, without prejudice to the need to respect and protect intellectual property rights and confidential business information or trade secrets.[81]
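The either/or structure of the classification, and the notification window described above, can be sketched as follows. The boolean inputs are placeholders: in practice the assessment of high-impact capabilities and any Commission decision involve technical evaluation rather than a simple flag.

```python
# Sketch of the either/or classification and the notification window described
# above. The boolean inputs are placeholders for what are, in reality,
# technical assessments and formal Commission decisions.
from datetime import date, timedelta

def is_systemic_risk(high_impact_capabilities: bool, commission_decision: bool) -> bool:
    return high_impact_capabilities or commission_decision

def notification_deadline(criteria_met_on: date) -> date:
    """Providers must notify the Commission without delay and, per the text
    above, within two weeks of the requirements being met."""
    return criteria_met_on + timedelta(weeks=2)

if is_systemic_risk(high_impact_capabilities=True, commission_decision=False):
    print("Notify the Commission by", notification_deadline(date(2025, 3, 1)))  # 2025-03-15
```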

Section 2 of Chapter V sets down obligations for providers of general purpose AI models and Section 3 sets down obligations for providers of general purpose AI models with systemic risk. The latter include all of the obligations in the previous section and also require performing model evaluation, assessing and mitigating possible systemic risks at Union level, keeping track of relevant information about serious incidents, and ensuring an adequate level of cybersecurity protection for the general purpose AI model with systemic risk.  

European Union “AI Act”

The Act, as it operates as a Regulation, is designed to have horizontal impact across the Member States in a uniform manner.[82] It is based on a “future-proof” definition of AI and sets out with the aim of creating “trustworthy AI” systems. The purpose of the Regulation (Article 1) is to

“improve the functioning of the internal market and promoting the uptake of human centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, rule of law and environmental protection against harmful effects of artificial intelligence systems in the Union and supporting innovation.”[83]

It can concern both providers (e.g. a developer of a CV-screening tool) and deployers of high-risk AI systems (e.g. a bank buying this screening tool). Importers of AI systems will also have to ensure that the foreign provider has already carried out the appropriate conformity assessment procedure and that the system bears a European Conformity (CE) marking and is accompanied by the required documentation and instructions of use.[84] There are also obligations on a distributor[85] and the overarching concept of operator.[86] Distributors have an obligation to ensure that a high-risk AI system bears the required CE conformity marking.[87]

The distinction between providers, on the one hand, and deployers, on the other, is important as the obligations on each can vary in accordance with the various terms of the Regulation. Operators, likewise, have specific obligations: Article 99, on penalties, for instance applies to “infringements of this Regulation by operators”; Article 79 refers to the obligation on the “operator of an AI system to take corrective action”; Article 82 places the onus on an operator to “take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service” no longer presents an identified risk. Deployers may have obligations to both providers and distributors, as in Article 26, which states: “When [the deployer has] reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 79(1) they shall, without undue delay, inform the provider or distributor and relevant market surveillance authority and suspend the use of the system.”[88]

The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market or its use affects people located in the EU.[89] Obligations generally do not apply to research, development and prototyping activities preceding their release on the market, and the regulation furthermore does not apply to AI systems that are exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.[90]

A risk-based approach is adopted in the Act. Kaminski in a paper[91] states there are four different models of risk regulation:

“There are at least four different models of risk-regulation: a highly quantitative version, a version that uses risk regulation as democratic oversight; a version focused on allocating regulatory resources by risk, and enterprise risk management. Often, policymakers do not explicitly specify which model they are pursuing. Often, too, they deploy more than one model at once. In the AI risk regulation context, this has led to recurring conflicts between stakeholders.”[92]

The EU AI Act is considered by the author to fall into the third model.[93] The Act follows a risk-based approach, requiring AI systems to be classified and assessed according to risk level, with corresponding requirements imposed at each level:

Minimal risk covers the vast majority of AI systems and includes systems such as spam filters, AI-enabled video games and inventory-management systems. Systems of this type benefit from a free pass and an absence of obligations,[94] as they present only minimal or no risk to citizens’ rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.[95]

Limited risk systems may be caught by transparency requirements, meaning a user interacting with the system should be aware that they are interacting with a machine.[96] They are also caught by the requirements in Article 1 mentioned above: health, safety[97] and adherence to fundamental rights. To be classified as such, limited risk systems need to meet one of the criteria set out in Recital 53.[98] Examples of systems that fall into this category are image-editing software and PDF generators. Deepfakes also fall into this category: see the discussion later in this chapter.

Towards the other end of the scale, High-risk systems[99] are those which concern certain critical infrastructures,[100] for instance in the fields of water, gas and electricity;[101] medical devices;[102] systems to determine access to educational institutions[103] or for recruiting people;[104] or certain systems used in the fields of law enforcement,[105] border control,[106] and the administration of justice and democratic processes.[107] Systems of this type are subject to strict requirements including risk-mitigation systems,[108] high quality of data sets,[109] logging of activity,[110] detailed documentation,[111] clear user information,[112] human oversight,[113] and a high level of robustness, accuracy and cybersecurity.[114] Regulatory sandboxes (see above for a definition) will also facilitate responsible innovation and the development of compliant High-risk systems.[115] Moreover, biometric identification (the automated recognition of physical, physiological and behavioural human features such as the face, eye movement, body shape, voice, prosody, gait, posture, heart rate, blood pressure, odour and keystroke characteristics)[116] and emotion recognition systems[117] are also considered high-risk.

The category of unacceptable risk covers the range of systems which are prohibited by law. These are AI systems considered a clear threat to the fundamental rights of people, and include AI systems or applications that manipulate behaviour to circumvent users’ free will, such as toys using voice assistance to encourage dangerous behaviour by minors; systems that allow ‘social scoring’ by governments or companies; and certain applications of predictive policing.[118] These types of systems are automatically prohibited by the Regulation. Recital 29 is important and is set out in full. The Recital states:

“AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices. The placing on the market, putting into service or use of certain AI systems with the objective to or the effect of materially distorting human behaviour, whereby significant harms, in particular having sufficiently important adverse impacts on physical, psychological health or financial interests are likely to occur, are particularly dangerous and should therefore be forbidden. Such AI systems deploy subliminal components such as audio, image, video stimuli that persons cannot perceive as those stimuli are beyond human perception or other manipulative or deceptive techniques that subvert or impair person’s autonomy, decision-making or free choices in ways that people are not consciously aware of, or even if aware they are still deceived or not able to control or resist. This could be for example, facilitated by machine-brain interfaces or virtual reality as they allow for a higher degree of control of what stimuli are presented to persons, insofar as they may be materially distorting their behaviour in a significantly harmful manner. In addition, AI systems may also otherwise exploit vulnerabilities of a person or a specific group of persons due to their age, disability within the meaning of Directive (EU) 2019/882, or a specific social or economic situation that is likely to make those persons more vulnerable to exploitation such as persons living in extreme poverty, ethnic or religious minorities. Such AI systems can be placed on the market, put into service or used with the objective to or the effect of materially distorting the behaviour of a person and in a manner that causes or is reasonably likely to cause significant harm to that or another person or groups of persons, including harms that may be accumulated over time and should therefore be prohibited. The intention to distort the behaviour may not be presumed if the distortion results from factors external to the AI system which are outside of the control of the provider or the deployer, meaning factors that may not be reasonably foreseen and mitigated by the provider or the deployer of the AI system. In any case, it is not necessary for the provider or the deployer to have the intention to cause significant harm, as long as such harm results from the manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are complementary to the provisions contained in Directive 2005/29/EC, notably unfair commercial practices leading to economic or financial harms to consumers are prohibited under all circumstances, irrespective of whether they are put in place through AI systems or otherwise. The prohibitions of manipulative and exploitative practices in this Regulation should not affect lawful practices in the context of medical treatment such as psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in accordance with the applicable legislation and medical standards, for example explicit consent of the individuals or their legal representatives. In addition, common and legitimate commercial practices, for example in the field of advertising, that are in compliance with the applicable law should not in themselves be regarded as constituting harmful manipulative AI practices.”

Likewise, Chapter II, on prohibited AI practices, is important. Article 5, which lists the prohibited practices, is set out in full later in this chapter.


Companies failing to comply with the Regulation will be fined. Fines are up to €35 million or 7% of global annual turnover (whichever is higher) for violations of the banned AI applications,[119] up to €15 million or 3% for violations of other obligations,[120] and up to €7.5 million or 1% for the supply of incorrect information.[121] Fines for providers of general purpose AI models are set down in the new Article 72a and amount to €15 million or 3%.
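Purely by way of illustration, and not as any form of compliance tool, the “whichever is higher” mechanics of these caps can be sketched in a few lines of Python. The tier labels and the helper function below are the author’s own assumptions; only the monetary amounts and percentages come from the text above.

# Illustrative sketch only: maximum fine caps as described in the paragraph above.
# Tier labels and the function name are assumptions, not terms used in the Act.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # up to €35m or 7% of global annual turnover
    "other_obligations": (15_000_000, 0.03),      # up to €15m or 3%
    "incorrect_information": (7_500_000, 0.01),   # up to €7.5m or 1%
}

def maximum_fine(tier: str, global_annual_turnover: float) -> float:
    """Return the applicable cap: the fixed amount or the turnover share, whichever is higher."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover)

# Example: a company with €2bn global annual turnover engaging in a prohibited practice
# faces a cap of max(€35m, 7% of €2bn) = €140m.
print(maximum_fine("prohibited_practices", 2_000_000_000))   # 140000000.0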

Governance is also dealt with in the Regulation.[122] A new European AI Office within the European Commission has been set up for the purpose of supervising implementation of the new rules at national level.[123]

Some of the salient terms in the Regulation are as follows:

Article 2 sets out the scope of the Regulation. It states:

1. This Regulation applies to:

(a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or who are located within the Union or in a third country;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the Union;

(ca) importers and distributors of AI systems;

(cb) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

(cc) authorised representatives of providers, which are not established in the Union;

(cd) affected persons that are located in the Union.

In Article 3 there are definitions for the following:

“Risk” means “the combination of the probability of an occurrence of harm and the severity of that harm”.

“AI system” means “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

This definition was heavily influenced by the OECD definition:

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”[124]

“Sandbox Plan” means “a document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox”.

“AI regulatory sandbox” means “a concrete and controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision”.

“deep fake” means “AI generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful”.

“general purpose AI model” means “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities”.
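The Article 3 definition of “risk” (the combination of the probability of an occurrence of harm and the severity of that harm) prescribes no formula. By way of a non-authoritative illustration only, one conventional way such a combination is operationalised in risk-management practice is a simple probability/severity matrix; the scales and scores below are invented for the example and have no basis in the Regulation.

# Illustrative only: one conventional way of combining probability and severity.
# The Regulation itself prescribes no scales, scores or formula.
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
SEVERITY = {"negligible": 1, "moderate": 2, "serious": 3, "critical": 4}

def risk_score(probability: str, severity: str) -> int:
    """Combine the probability of an occurrence of harm with the severity of that harm."""
    return PROBABILITY[probability] * SEVERITY[severity]

print(risk_score("likely", "serious"))   # 9 on this purely illustrative 1-16 scale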

Article 4 refers to AI literacy and requires providers and deployers of AI systems to ensure, to their best extent, that there is a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems.

Article 4 states:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used.”

Article 5 deals with prohibited artificial intelligence practices. It refers, among other things, to an AI system that uses “subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques” with the objective or effect of materially distorting a person’s behaviour, and to the placing on the market of an AI system that exploits any vulnerabilities of a person due to their age, disability, or a specific social or economic situation, again with the objective or effect of materially distorting the behaviour of that person. These types of systems are prohibited.[125]

Article 5 states:

1. The following artificial intelligence practices shall be prohibited:

(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;

(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective to or the effect of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

(ba) the placing on the market or putting into service for this specific purpose, or use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement;

(c) the placing on the market, putting into service or use of AI systems for the evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts that are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement unless and in as far as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of human beings as well as search for missing persons;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;

(iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences, referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. This paragraph is without prejudice to the provisions in Article 9 of the GDPR for the processing of biometric data for purposes other than law enforcement;

(da) the placing on the market, putting into service for this specific purpose, or use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person to commit a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; This prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;

(db) the placing on the market, putting into service for this specific purpose, or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;

(dc) the placing on the market, putting into service for this specific purpose, or use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions except in cases where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;

(iiid) deleted.

1a. This Article shall not affect the prohibitions that apply where an artificial intelligence practice infringes other Union law.

2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall only be deployed for the purposes under paragraph 1, point d) to confirm the specifically targeted individual’s identity and it shall take into account the following elements:

(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;

(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with national legislations authorizing the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall only be authorised if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the database according to Article 49. However, in duly justified cases of urgency, the use of the system may be commenced without the registration, provided that the registration is completed without undue delay.

3. As regards paragraphs 1, point (d) and 2, each use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation provided that, such authorisation shall be requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, its use shall be stopped with immediate effect and all the data, as well as the results and outputs of this use shall be immediately discarded and deleted. The competent judicial authority or an independent administrative authority whose decision is binding shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as geographic and personal scope. In deciding on the request, the competent judicial authority or an independent administrative authority whose decision is binding shall take into account the elements referred to in paragraph 2. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the judicial authority or an independent administrative authority whose decision is binding solely based on the output of the remote biometric identification system.

3a. Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 4. The notification shall as a minimum contain the information specified under paragraph 5 and shall not include sensitive operational data.

4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof. Member States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric identification systems.

5. National market surveillance authorities and the national data protection authorities of Member States that have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes pursuant to paragraph 3a shall submit to the Commission annual reports on such use. For that purpose, the Commission shall provide Member States and national market surveillance and data protection authorities with a template, including information on the number of the decisions taken by competent judicial authorities or an independent administrative authority whose decision is binding upon requests for authorisations in accordance with paragraph 3 and their result.

6. The Commission shall publish annual reports on the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes based on aggregated data in Member States based on the annual reports referred to in paragraph 5, which shall not include sensitive operational data of the related law enforcement activities.

Article 6 deals with High-risk systems and states that an AI system shall be considered as high-risk where:

The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, and the product is required to undergo a third party conformity assessment, in both cases where the product falls within the subject matter of the Union harmonisation legislation indicated in Annex I. Recital 50 states that:

“As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation listed in Annex I, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.”

Article 6 states:

1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third party conformity assessment, with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.

2a. By derogation from paragraph 2 AI systems shall not be considered as high risk if they do not pose a significant risk of harm, to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. This shall be the case if one or more of the following criteria are fulfilled:

(a) the AI system is intended to perform a narrow procedural task;

(b) the AI system is intended to improve the result of a previously completed human activity;

(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or

(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III. Notwithstanding first subparagraph of this paragraph, an AI system shall always be considered high-risk if the AI system performs profiling of natural persons.

2b. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(1a). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

2c. The Commission shall, after consulting the AI Board, and no later than [18 months] after the entry into force of this Regulation, provide guidelines specifying the practical implementation of this article completed by a comprehensive list of practical examples of high risk and non-high risk use cases on AI systems pursuant to Article 82b.

2d. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend the criteria laid down in points a) to d) of the first subparagraph of paragraph 2a. The Commission may adopt delegated acts adding new criteria to those laid down in points a) to d) of the first subparagraph of paragraph 2a, or modifying them, only where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III but that do not pose a significant risk of harm to the health, safety and fundamental rights. The Commission shall adopt delegated acts deleting any of the criteria laid down in the first subparagraph of paragraph 2a where there is concrete and reliable evidence that this is necessary for the purpose of maintaining the level of protection of health, safety and fundamental rights in the Union. Any amendment to the criteria laid down in points a) to d) set out in the first subparagraph of paragraph 2a shall not decrease the overall level of protection of health, safety and fundamental rights in the Union. When adopting the delegated acts, the Commission shall ensure consistency with the delegated acts adopted pursuant to Article 7(1) and shall take account of market and technological developments.

An AI system will not be considered high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.[126] This will be the case if one or more of the criteria in Article 6(2a), points (a) to (d), quoted above, are fulfilled.

Systems that satisfy any of those criteria will be classified as limited risk under the Regulation and will still be subject to the requirements of transparency[128] and the obligations pursuant to Article 1 on health, safety and fundamental rights. The Fundamental Rights Impact Assessment for High-risk systems pursuant to Article 27 should also be noted.[129]

Article 27 states:

1. Prior to deploying a high-risk AI system as defined in Article 6(2) into use, with the exception of AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law or private operators providing public services and operators deploying high-risk systems referred to in Annex III, point 5, b) and d) shall perform an assessment of the impact on fundamental rights that the use of the system may produce. For that purpose, deployers shall perform an assessment consisting of:

a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;

b) a description of the period of time and frequency in which each high-risk AI system is intended to be used;

c) the categories of natural persons and groups likely to be affected by its use in the specific context;

d) the specific risks of harm likely to impact the categories of persons or group of persons identified pursuant point (c), taking into account the information given by the provider pursuant to Article 13;

e) a description of the implementation of human oversight measures, according to the instructions of use;

f) the measures to be taken in case of the materialization of these risks, including their arrangements for internal governance and complaint mechanisms.

2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by provider. If, during the use of the high-risk AI system, the deployer considers that any of the factors listed in paragraph 1change are or no longer up to date, the deployer will take the necessary steps to update the information.

3. Once the impact assessment has been performed, the deployer shall notify the market surveillance authority of the results of the assessment, submitting the filled template referred to in paragraph 5 as a part of the notification. In the case referred to in Article 46(1), deployers may be exempted from these obligations.

4. If any of the obligations laid down in this article are already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 shall be conducted in conjunction with that data protection impact assessment.

5. The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate users to implement the obligations of this Article in a simplified manner.
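The elements that Article 27(1), points (a) to (f), requires the assessment to contain might, purely as an illustration, be captured in a structured record along the following lines. The field names are the author’s assumptions; the Act leaves the format to the template to be developed by the AI Office under paragraph 5.

# Illustrative sketch of a record mirroring Article 27(1)(a)-(f); field names are assumptions.
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    deployer_processes: str                        # (a) processes in which the system will be used
    period_and_frequency_of_use: str               # (b) period of time and frequency of intended use
    affected_persons_and_groups: list[str]         # (c) categories of natural persons and groups affected
    specific_risks_of_harm: list[str]              # (d) risks to those categories, using provider information
    human_oversight_measures: str                  # (e) oversight measures per the instructions of use
    measures_if_risks_materialise: str             # (f) incl. internal governance and complaint mechanisms
    notified_to_market_surveillance: bool = False  # paragraph 3 notification of the results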

An AI system shall always be considered high-risk if the AI system performs profiling of natural persons.[130] Annex III sets out the list of high-risk systems referenced in Article 6(2), and these are set out here in full for ease of reference:

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:

1. Biometrics, insofar as their use is permitted under relevant Union or national law:

(a) Remote biometric identification systems. This shall not include AI systems intended to be used for biometric verification whose sole purpose is to confirm that a specific natural person is the person he or she claims to be;

(aa) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;

(ab) AI systems intended to be used for emotion recognition.

2. Critical infrastructure:

(a) AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity.

3. Education and vocational training:

(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;

(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;

(ba) AI systems intended to be used for the purpose of assessing the appropriate level of education that individual will receive or will be able to access, in the context of/within education and vocational training institution;

(bb) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of/within education and vocational training institutions.

4. Employment, workers management and access to self-employment:

(a) AI systems intended to be used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;

(b) AI intended to be used to make decisions affecting terms of the work related relationships, promotion and termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics and to monitor and evaluate performance and behaviour of persons in such relationships.

5. Access to and enjoyment of essential private services and essential public services and benefits:

(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score , with the exception of AI systems used for the purpose of detecting financial fraud;

(c) AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems;

(ca) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

6. Law enforcement, insofar as their use is permitted under relevant Union or national law:

(a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, agencies, offices or bodies in support of law enforcement authorities or on their behalf to assess the risk of a natural person to become a victim of criminal offences;

(b) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies and agencies in support of Law enforcement authorities as polygraphs and similar tools;

(c) [deleted];

(d) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, agencies, offices or bodies in support of law enforcement authorities to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences;

(e) AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, agencies, offices or bodies in support of law enforcement authorities for assessing the risk of a natural person of offending or re-offending not solely based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;

(f) AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies institutions, agencies, offices or bodies in support of law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;

(g) [deleted].

7. Migration, asylum and border control management, insofar as their use is permitted under relevant Union or national law:

(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools;

(b) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;

(c) [deleted];

(d) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessment of the reliability of evidence;

(da) AI systems intended to be used by or on behalf of competent public authorities, including Union agencies, offices or bodies, in the context of migration, asylum and border control management, for the purpose of detecting, recognising or identifying natural persons with the exception of verification of travel documents.

8. Administration of justice and democratic processes:

(a) AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts or used in a similar way in alternative dispute resolution;

(aa) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view;

(#) [deleted].

Article 6(2b) states:

“A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 51(1a). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.”

This effectively means that a provider which considers an AI system falling within the scope of Annex III not to be high-risk must document that assessment before the system is placed on the Union market or put into service, and must make the documentation of the assessment available to the national competent authority on request.

So, to recap: an AI system is High-risk where it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation mentioned (concerning, for instance, toys, lifts, pressure equipment, recreational craft equipment and cableway installations) and that product is required to undergo a third party conformity assessment.[131] An AI system is also High-risk where it concerns any of the subject matters set out in Annex III, set out earlier in this chapter.[132] However, there is a derogation from this position where a system prima facie caught by Annex III does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making, and one or more of the specified criteria, set out earlier, are satisfied.[133] Satisfying any of those criteria means the AI system in question can be considered limited risk and consequently will be subject to lesser requirements.[134] Finally, it is worth noting that before any of these considerations arise, the deployer should consider whether the AI system in question falls into the category of unacceptable risk.[135] Where it does, it is prohibited under the Regulation.
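That recap can be expressed, again purely by way of a non-authoritative sketch, as a short decision procedure. Every function and parameter name below is an assumption made for illustration; the real analysis turns on the definitions, Annexes and derogations of the Regulation itself.

# Illustrative sketch of the classification walk-through recapped above; not legal advice.
def classify_ai_system(
    is_prohibited_practice: bool,                        # Article 5 (unacceptable risk)
    annex_i_product_with_third_party_assessment: bool,   # Article 6(1) conditions (a) and (b)
    falls_within_annex_iii: bool,                        # Article 6(2)
    meets_article_6_2a_derogation: bool,                 # narrow procedural task, preparatory task, etc.
    performs_profiling_of_natural_persons: bool,
) -> str:
    if is_prohibited_practice:
        return "prohibited (unacceptable risk)"
    if annex_i_product_with_third_party_assessment:
        return "high-risk (Article 6(1))"
    if falls_within_annex_iii:
        # Profiling of natural persons is always high-risk, notwithstanding the derogation.
        if performs_profiling_of_natural_persons or not meets_article_6_2a_derogation:
            return "high-risk (Article 6(2) and Annex III)"
        return "not high-risk under the derogation (document the assessment per Article 6(2b))"
    return "outside the high-risk categories (transparency and other obligations may still apply)"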

Article 7 provides that the Commission is empowered, in accordance with Article 97, to adopt delegated acts amending Annex III by adding or modifying use cases of high-risk AI systems and, under Article 7(2a), by removing high-risk AI systems from the list where the conditions set out in that paragraph are fulfilled. A number of changes were made to Annex III in its final draft version and these have been indicated in the footnotes above.

Article 7 states:

1. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend Annex III by adding or modifying use cases of high-risk AI systems where both of the following conditions are fulfilled:

(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;

(b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:

(a) the intended purpose of the AI system;

(b) the extent to which an AI system has been used or is likely to be used;

(ba) the nature and amount of the data processed and used by the AI system, in particular whether special categories of personal data are processed;

(bb) the extent to which the AI system acts autonomously and the possibility for a human to override a decision or recommendations that may lead to potential harm;

(c) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, as demonstrated for example by reports or documented allegations submitted to national competent authorities or by other reports, as appropriate;

(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons or to disproportionately affect a particular group of persons;

(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;

(f) the extent to which there is an imbalance of power, or the potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age;

(g) the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account the technical solutions available to correct or reverse, whereby outcomes having and adverse impact on health, safety, fundamental rights, shall not be considered as easily corrigible or reversible;

(gb) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, including possible improvements in product safety;

(h) the extent to which existing Union legislation provides for:

(i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;

(ii) effective measures to prevent or substantially minimise those risks.

2a. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend the list in Annex III by removing high-risk AI systems where both of the following conditions are fulfilled:

(a) the high-risk AI system(s) concerned no longer pose any significant risks to fundamental rights, health or safety, taking into account the criteria listed in paragraph 2;

(b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.

Article 97 states:

1. The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article.

2. The power to adopt delegated acts referred to in [Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 47(5)] shall be conferred on the Commission for a period of five years from … [the date of entry into force of the Regulation]. The Commission shall draw up a report in respect of the delegation of power not later than 9 months before the end of the five-year period. The delegation of power shall be tacitly extended for periods of an identical duration, unless the European Parliament or the Council opposes such extension not later than three months before the end of each period.

3. The delegation of power referred to in [Article 7(1), Article 7(3), Article 11(3), Article 43(5) and (6) and Article 47(5)] may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.

4. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.

5. Any delegated act adopted pursuant to [Article 4], Article 7(1), Article 11(3), Article 43(5) and (6) and Article 47(5) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.

The remainder of Title III deals with other aspects of High-risk AI systems. Article 8 sets out compliance obligations for providers of High-risk AI systems.

Article 8 states:

1. High-risk AI systems shall comply with the requirements established in this Section, taking into account its intended purpose as well as the generally acknowledged state of the art on AI and AI related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.

2.

2a. Where a product contains an artificial intelligence system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Annex I, Section A apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements required under the Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred in paragraph 1 with the requirements set out in Section 2 of this Title, and in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice to integrate, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into already existing documentation and procedures required under the Union harmonisation legislation listed in Annex I, Section A.

 Article 9 sets down a requirement for a risk management system:

1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.

2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:

(a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to the health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;

(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;

(c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 72;

(d) adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point a of this paragraph in accordance with the provisions of the following paragraphs. (…)
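As a purely illustrative aid, the four steps of Article 9(2) can be read as a repeating checklist to be run throughout the system’s lifecycle; the function and data structure below are the author’s assumptions and nothing more.

# Illustrative only: the iterative steps of Article 9(2)(a)-(d) as a repeatable checklist.
ARTICLE_9_STEPS = [
    "(a) identify and analyse known and reasonably foreseeable risks to health, safety or fundamental rights",
    "(b) estimate and evaluate risks under the intended purpose and reasonably foreseeable misuse",
    "(c) evaluate other emerging risks using post-market monitoring data (Article 72)",
    "(d) adopt appropriate and targeted risk management measures for the risks identified",
]

def run_risk_management_iteration(iteration: int) -> None:
    """One pass of the continuous, iterative process; to be repeated and reviewed regularly."""
    for step in ARTICLE_9_STEPS:
        print(f"iteration {iteration}: {step}")

run_risk_management_iteration(1)   # repeated over the entire lifecycle of the system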

Article 10 deals with data and data governance and refers to the training of models with data, stating that “training, validation and testing of datasets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose”.

Article 10 states:

1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such datasets are used.

2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices appropriate for the intended purpose of the AI system. Those practices shall concern in particular:

(a) the relevant design choices;

(aa) data collection processes and origin of data, and in the case of personal data, the original purpose of data collection;

(b) [deleted];

(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation;

(d) the formulation of assumptions, notably with respect to the information that the data are supposed to measure and represent;

(e) an assessment of the availability, quantity and suitability of the data sets that are needed;

(f) examination in view of possible biases that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations;

(fa) appropriate measures to detect, prevent and mitigate possible biases identified according to point f;

(g) the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. (…)

Article 11 sets down the requirement for technical documentation and states that “the technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Section and provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements.” Article 12 provides a requirement for record-keeping and states that “High-risk AI systems shall technically allow for the automatic recording of events (‘logs’) over the duration of the lifetime of the system”. Article 13 deals with transparency and the provision of information to deployers. This includes a requirement that High-risk systems “be accompanied by instructions for use in an appropriate digital format.”[136]

Article 13 states:

“1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Chapter 3 of this Title.

2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users.

3. The instructions for use shall contain at least the following information:

(a) the identity and the contact details of the provider and, where applicable, of its authorised representative;

(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:

(i) its intended purpose;

(ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;

(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2);

(iiia) where applicable, the technical capabilities and characteristics of the AI system to provide information that is relevant to explain its output;

(iv) when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used;

(v) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system;

(va) where applicable, information to enable deployers to interpret the system’s output and use it appropriately;

(c) the changes to the high-risk AI system and its performance which have been predetermined by the provider at the moment of the initial conformity assessment, if any;

(d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the deployers;

(e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates;

(ea) where relevant, a description of the mechanisms included within the AI system that allows users to properly collect, store and interpret the logs in accordance with Article 12.
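To illustrate how the Article 13(3) information might be captured in “an appropriate digital format”, the following sketch records a subset of the required items in a machine-readable structure. The field names and example values are the author's assumptions and do not represent a prescribed schema.

```python
# Illustrative sketch only: capturing some of the Article 13(3) "instructions for use" items in a
# machine-readable structure that could accompany a high-risk system in digital format.
# The field names loosely mirror points (a)-(ea) and are assumptions, not a prescribed schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class InstructionsForUse:
    provider_identity: str
    provider_contact: str
    intended_purpose: str
    accuracy_metrics: dict          # point (b)(ii): declared accuracy, robustness, cybersecurity levels
    known_risk_circumstances: list  # point (b)(iii)
    human_oversight_measures: list  # point (d)
    maintenance: str                # point (e)
    log_interpretation: str = ""    # point (ea)
    input_data_specs: dict = field(default_factory=dict)  # point (b)(v)

card = InstructionsForUse(
    provider_identity="Example Provider Ltd",
    provider_contact="compliance@example.eu",
    intended_purpose="Triage of incoming loan applications",
    accuracy_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    known_risk_circumstances=["performance degrades on applications in languages other than English"],
    human_oversight_measures=["all adverse outputs reviewed by a credit officer before any decision"],
    maintenance="model retrained quarterly; software updates applied monthly",
    log_interpretation="logs exported as JSON lines, one record per inference",
)
print(json.dumps(asdict(card), indent=2))
```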

 Article 14 refers to human oversight. It states:

“1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.

3. The oversight measures shall be commensurate to the risks, level of autonomy and context of use of the AI system and shall be ensured through either one or all of the following types of measures:

(a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;

(b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user.

4. For the purpose of implementing paragraphs 1 to 3, the high-risk AI system shall be provided to the user in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate to the circumstances:

(a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, also in view of detecting and addressing anomalies, dysfunctions and unexpected performance;

(b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

(c) to correctly interpret the high-risk AI system’s output, taking into account for example the interpretation tools and methods available;

(d) to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;

(e) to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure that allows the system to come to a halt in a safe state.

5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. The requirement for a separate verification by at least two natural persons shall not apply to high risk AI systems used for the purpose of law enforcement, migration, border control or asylum, in cases where Union or national law considers the application of this requirement to be disproportionate.”
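The oversight measures described in Article 14(4), in particular the ability to disregard or override an output (point (d)) and to halt the system in a safe state (point (e)), can be illustrated with a minimal sketch. The interfaces and names used are assumptions for illustration only and are not derived from the Act.

```python
# Illustrative sketch only: a thin oversight wrapper of the kind Article 14 contemplates,
# allowing the assigned natural person to disregard or override the output (paragraph 4(d))
# and to bring the system to a halt in a safe state (paragraph 4(e)). Interfaces are assumptions.
class OversightWrapper:
    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.halted = False  # "stop button" state

    def stop(self):
        """Bring the system to a halt in a safe state (Article 14(4)(e))."""
        self.halted = True

    def predict(self, inputs, human_decision=None):
        if self.halted:
            raise RuntimeError("System halted by human overseer; no output produced.")
        output = self.model_fn(inputs)
        # The overseer may disregard, override or confirm the output (Article 14(4)(d)).
        if human_decision is not None:
            return {"system_output": output, "final_decision": human_decision,
                    "overridden": human_decision != output}
        return {"system_output": output, "final_decision": output, "overridden": False}

if __name__ == "__main__":
    wrapped = OversightWrapper(lambda x: "refuse" if x["score"] < 0.5 else "approve")
    print(wrapped.predict({"score": 0.4}, human_decision="approve"))  # overseer overrides
    wrapped.stop()
```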

Article 15 sets down the obligation for accuracy, robustness and cybersecurity. 

Article 15 states:

“1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle.

1a. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 of this Article and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholder and organisations such as metrology and benchmarking authorities, encourage as appropriate, the development of benchmarks and measurement methodologies.

2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.

3. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken towards this regard. The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (‘feedback loops’) and to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.

4. High-risk AI systems shall be resilient as regards to attempts by unauthorised third parties to alter their use, outputs or performance by exploiting the system vulnerabilities. The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training dataset (‘data poisoning’), or pre-trained components used in training (‘model poisoning’) , inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks or model flaws.”
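A very crude illustration of the sort of testing that might inform the accuracy and robustness declarations contemplated by Article 15 is sketched below: it compares accuracy on clean inputs with accuracy under small random perturbations. Genuine conformity work would rely on recognised benchmarks and adversarial testing; the toy model, data and noise level here are assumptions of the author.

```python
# Illustrative sketch only: a crude robustness probe in the spirit of Article 15, comparing
# accuracy on clean inputs with accuracy under small random perturbations. Real conformity
# work would use recognised benchmarks and adversarial testing; numbers here are assumptions.
import random

def accuracy(model_fn, samples):
    return sum(model_fn(x) == y for x, y in samples) / len(samples)

def perturb(x, noise=0.05):
    return [v + random.uniform(-noise, noise) for v in x]

def robustness_report(model_fn, samples, noise=0.05):
    clean = accuracy(model_fn, samples)
    noisy = accuracy(model_fn, [(perturb(x, noise), y) for x, y in samples])
    return {"clean_accuracy": clean, "noisy_accuracy": noisy, "degradation": clean - noisy}

if __name__ == "__main__":
    # Toy classifier: positive if the mean of the features exceeds 0.5.
    model = lambda x: int(sum(x) / len(x) > 0.5)
    data = [([0.9, 0.8], 1), ([0.2, 0.1], 0), ([0.6, 0.7], 1), ([0.4, 0.3], 0)]
    print(robustness_report(model, data))
```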

Article 16 deals with the Obligations of Providers of High-Risk AI Systems. 

Article 16 states:

Providers of high-risk AI systems shall:

(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title;

(aa) indicate their name, registered trade name or registered trade mark, the address at which they can be contacted on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable;

(b) have a quality management system in place which complies with Article 17;

(c) keep the documentation referred to in Article 18;

(d) when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 20;

(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its placing on the market or putting into service;

(ea) draw up an EU declaration of conformity in accordance with Article 48;

(eb) affix the CE marking to the high-risk AI system to indicate conformity with this Regulation, in accordance with Article 49;

(f) comply with the registration obligations referred to in Article 51(1);

(g) take the necessary corrective actions and provide information as required in Article 21;

(i) Moved above in line 313b;

(j) upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title;

(ja) ensure that the high-risk AI system complies with accessibility requirements, in accordance with Directive 2019/882 on accessibility requirements for products and services and Directive 2016/2102 on the accessibility of the websites and mobile applications of public sector bodies.

Article 17 sets out obligations for a Quality Management System. Article 18 sets down a ten-year obligation for the keeping of certain records: 

“The provider shall, for a period ending 10 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:

(a) the technical documentation referred to in Article 11;

(b) the documentation concerning the quality management system referred to in Article 17;

(c) the documentation concerning the changes approved by notified bodies where applicable;

(d) the decisions and other documents issued by the notified bodies where applicable;

(e) the EU declaration of conformity referred to in Article 47. (…)

The remaining provisions of Title III set out further obligations on providers of AI systems: Automatically Generated Logs (Article 19, which obliges providers of high-risk AI systems to keep the logs, referred to earlier in Article 12, that are automatically generated by their systems), Corrective Actions and Duty of Information (Article 20), Cooperation with Competent Authorities (Article 21), and Authorised Representatives (Article 22).

The Obligations of Importers are dealt with in Article 23. An importer is described earlier in the Act as “any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union”.[137] This is distinguished from a “distributor”, which means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.[138] The concept of an “operator” embraces both importer and distributor and means “the provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor”.[139] Article 23 on obligations specific to importers states:

1. Before placing a high-risk AI system on the market, importers of such system shall ensure that such a system is in conformity with this Regulation by verifying that:

(a) the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of that AI system;

(b) the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV;

(c) the system bears the required CE conformity marking and is accompanied by the EU declaration of conformity and instructions of use;

(ca) the provider has appointed an authorised representative in accordance with Article 22(1).

2. Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place that system on the market until that AI system has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the AI system , the authorised representatives and the market surveillance authorities to that effect.

3. Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable.

4. Importers shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise its compliance with the requirements set out in Section 2 of this Title.

4a. Importers shall keep, for a period ending 10 years after the AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use and of the EU declaration of conformity.

5. Importers shall provide national competent authorities, upon a reasoned request, with all the necessary information and documentation including that kept in accordance with paragraph 4a to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 of this Chapter in a language which can be easily understood by them. To this purpose they shall also ensure that the technical documentation can be made available to those authorities.

5a. Importers shall cooperate with national competent authorities on any action those authorities take, in particular to reduce and mitigate the risks posed by the high-risk AI system.

The subsequent article deals with obligations on distributors. Article 24 states:

1. Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by a copy of EU declaration of conformity and instruction of use, and that the provider and the importer of the system, as applicable, have complied with their obligations set out in Article 16, point (aa) and (b) and 26(3) respectively.

2. Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect.

3. Distributors shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise the compliance of the system with the requirements set out in Section 2 of this Title.

4. A distributor that considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Section 2 of this Chapter shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.

5. Upon a reasoned request from a national competent authority, distributors of the high- risk AI system shall provide that authority with all the information and documentation regarding its activities as described in paragraph 1 to 4 necessary to demonstrate the conformity of a high-risk system with the requirements set out in Section 2 of this Title.

5a. Distributors shall cooperate with national competent authorities on any action those authorities take in relation to an AI system, of which they are the distributor, in particular to reduce or mitigate the risk posed by the high-risk AI system.

Responsibilities Along the AI Value Chain (Article 25) refers to obligations which fall under the Regulation on: any entity that puts its name or trademark on a high-risk AI system already placed on the market or put into service (without prejudice to contractual arrangements stipulating that the obligations are allocated otherwise); any entity that makes a substantial modification to a high-risk AI system that has already been placed on the market or put into service in such a way that it remains a high-risk AI system in accordance with Article 6; or any entity that modifies the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service, in such a manner that the AI system becomes a high-risk AI system in accordance with Article 6. 

The Obligations of Deployers of High-Risk AI Systems are dealt with under Article 26. A deployer is defined in Article 3 as being: “any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity”. Obligations on deployers in Article 26 include: assigning human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support; and taking appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems.

A Fundamental Rights Impact Assessment for High-Risk AI Systems is required pursuant to Article 27: deployers of relevant systems shall perform an assessment of the impact on fundamental rights that the use of the system may produce, and that assessment shall include “the categories of natural persons and groups likely to be affected by its use” and the “specific risks of harm likely to impact” those categories of persons or groups. 

Article 27 states:

1. Prior to deploying a high-risk AI system as defined in Article 6(2) into use, with the exception of AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law or private operators providing public services and operators deploying high-risk systems referred to in Annex III, point 5, b) and d) shall perform an assessment of the impact on fundamental rights that the use of the system may produce. For that purpose, deployers shall perform an assessment consisting of:

a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;

b) a description of the period of time and frequency in which each high-risk AI system is intended to be used;

c) the categories of natural persons and groups likely to be affected by its use in the specific context;

d) the specific risks of harm likely to impact the categories of persons or group of persons identified pursuant point (c), taking into account the information given by the provider pursuant to Article 13;

e) a description of the implementation of human oversight measures, according to the instructions of use;

f) the measures to be taken in case of the materialization of these risks, including their arrangements for internal governance and complaint mechanisms.

2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the factors listed in paragraph 1 has changed or is no longer up to date, the deployer will take the necessary steps to update the information.

3. Once the impact assessment has been performed, the deployer shall notify the market surveillance authority of the results of the assessment, submitting the filled template referred to in paragraph 5 as a part of the notification. In the case referred to in Article 47(1), deployers may be exempted from these obligations.

4. If any of the obligations laid down in this article are already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 shall be conducted in conjunction with that data protection impact assessment.

5. The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate users to implement the obligations of this Article in a simplified manner.
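Pending the template envisaged by Article 27(5), a deployer might use a simple completeness check against the items in Article 27(1)(a) to (f) before notifying the market surveillance authority, along the lines of the following sketch. The field names are the author's assumptions and the AI Office template may differ considerably.

```python
# Illustrative sketch only: a simple completeness check for the Article 27(1)(a)-(f) items a
# deployer would need before notifying the market surveillance authority. The AI Office's
# actual template (Article 27(5)) may look quite different; field names are assumptions.
REQUIRED_FIELDS = {
    "deployer_processes",             # (a) processes in which the system will be used
    "period_and_frequency",           # (b)
    "affected_categories",            # (c) categories of natural persons and groups
    "specific_risks_of_harm",         # (d)
    "human_oversight_measures",       # (e)
    "risk_materialisation_measures",  # (f) incl. internal governance and complaint mechanisms
}

def fria_ready_for_notification(assessment):
    """Return (ready, missing_fields) for a draft fundamental rights impact assessment."""
    missing = {f for f in REQUIRED_FIELDS if not assessment.get(f)}
    return (not missing, missing)

if __name__ == "__main__":
    draft = {
        "deployer_processes": "automated shortlisting of benefit applications",
        "affected_categories": ["applicants over 65", "applicants with disabilities"],
    }
    ready, missing = fria_ready_for_notification(draft)
    print("ready:", ready, "missing:", sorted(missing))
```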

Section 4 of Title III sets down obligations with regard to Notifying Authorities and Notified Bodies and comprises Articles 28 to 39. Article 28 concerns the Member State obligation to designate or establish at least one notifying authority responsible for the assessment, designation and notification of conformity assessment bodies and for their monitoring. Article 28 states:

1. Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. These procedures shall be developed in cooperation between the notifying authorities of all Member States.

2. Member States may decide that the assessment and monitoring referred to in paragraph 1 shall be carried out by a national accreditation body within the meaning of and in accordance with Regulation (EC) No 765/2008.

3. Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies and the objectivity and impartiality of their activities are safeguarded.

4. Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies.

5. Notifying authorities shall not offer or provide any activities that conformity assessment bodies perform or any consultancy services on a commercial or competitive basis.

6. Notifying authorities shall safeguard the confidentiality of the information they obtain in accordance with Article 78.

7. Notifying authorities shall have an adequate number of competent personnel at their disposal for the proper performance of their tasks. Competent personnel shall have the necessary expertise, where applicable, for their function, in fields such as information technologies, artificial intelligence and law, including the supervision of fundamental rights.

Article 29 deals with the setting up within Member States of Conformity Assessment Bodies. Requirements for these Bodies are set down in Article 31:

1. A notified body shall be established under national law of a Member State and have legal personality.

2. Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks, as well as suitable cybersecurity requirements.

3. The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall be such as to ensure that there is confidence in the performance by and in the results of the conformity assessment activities that the notified bodies conduct.

4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which it performs conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in the high-risk AI system that is assessed, as well as of any competitors of the provider. This shall not preclude the use of assessed AI systems that are necessary for the operations of the conformity assessment body or the use of such systems for personal purposes.

4a. A conformity assessment body, its top-level management and the personnel responsible for carrying out the conformity assessment tasks shall not be directly involved in the design, development, marketing or use of high-risk AI systems, or represent the parties engaged in those activities. They shall not engage in any activity that may conflict with their independence of judgement or integrity in relation to conformity assessment activities for which they are notified. This shall in particular apply to consultancy services.

5. Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and impartiality of their activities. Notified bodies shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout their organisation, personnel and assessment activities.

6. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information in accordance with Article 78 which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out.

7. Notified bodies shall have procedures for the performance of activities which take due account of the size of an undertaking, the sector in which it operates, its structure, the degree of complexity of the AI system in question.

8. Notified bodies shall take out appropriate liability insurance for their conformity assessment activities, unless liability is assumed by the Member State in which they are established in accordance with national law or that Member State is itself directly responsible for the conformity assessment.

9. Notified bodies shall be capable of carrying out all the tasks falling to them under this Regulation with the highest degree of professional integrity and the requisite competence in the specific field, whether those tasks are carried out by notified bodies themselves or on their behalf and under their responsibility.

10. Notified bodies shall have sufficient internal competences to be able to effectively evaluate the tasks conducted by external parties on their behalf. The notified body shall have permanent availability of sufficient administrative, technical, legal and scientific personnel who possess experience and knowledge relating to the relevant types of artificial intelligence systems, data and data computing and to the requirements set out in Section 2 of this Title.

11. Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take part directly or be represented in European standardisation organisations, or ensure that they are aware and up to date in respect of relevant standards.

Article 30 concerns the Notification Procedure to the Commission and other Member States by a notifying authority when the requirements in Article 31 have been satisfied by a relevant Notified Body. Article 32 sets out a Presumption of Conformity with Requirements Relating to Notified Bodies once the notified body’s name has been published in the Official Journal. Subcontracting by the Notified Body is dealt with in Article 33 and places an obligation on the Notified Body to ensure the subcontractor complies with the obligations in Article 31:

1. Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 31 and shall inform the notifying authority accordingly.

2. Notified bodies shall take full responsibility for the tasks performed by subcontractors or subsidiaries wherever these are established.

3. Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider. Notified bodies shall make a list of their subsidiaries publicly available.

4. The relevant documents concerning the assessment of the qualifications of the subcontractor or the subsidiary and the work carried out by them under this Regulation shall be kept at the disposal of the notifying authority for a period of 5 years from the termination date of the subcontracting activity.

Notified Bodies have an obligation, as per Article 34, to comply with the conformity assessment procedures set down in Article 43. Article 43 refers to the Conformity Assessment Procedures in Annex VI and Annex VII. Annex VI states that:

1. The conformity assessment procedure based on internal control is the conformity assessment procedure based on points 2 to 4.

2. The provider verifies that the established quality management system is in compliance with the requirements of Article 17.

3. The provider examines the information contained in the technical documentation in order to assess the compliance of the AI system with the relevant essential requirements set out in Chapter III, Section 2.

4. The provider also verifies that the design and development process of the AI system and its post-market monitoring as referred to in Article 72 is consistent with the technical documentation.

Annex VII deals with conformity based on an assessment of the quality management system and an assessment of the technical documentation. Surveillance, by the notified body, of the approved quality management system is also contemplated. 

Article 35 deals with Identification Numbers and Lists of Notified Bodies Designated Under this Regulation; Article 36 concerns Changes to Notifications; Article 37 envisages Challenges to the Competence of Notified Bodies and states that “the Commission shall, where necessary, investigate all cases where there are reasons to doubt the competence of a notified body or the continued fulfilment by a notified body of the requirements laid down in Article 31 and their applicable responsibilities.” Article 38 deals with the Coordination of Notified Bodies and requests the Commission to put in place measures to ensure appropriate coordination and cooperation. Conformity Assessment Bodies of Third Countries are permitted pursuant to Article 39 and these must also comply with the requirements in Article 31.

Section 5 deals with Standards, Conformity Assessment, Certificates and Registration and includes provisions on Harmonised Standards and Standardisation Deliverables (Article 40), Common Specifications (Article 41), Presumption of Conformity with Certain Requirements (Article 42), Conformity Assessment (Article 43), Certificates (Article 44) and Information Obligations of Notified Bodies (Article 45). Article 46 envisages a Derogation from the Conformity Assessment Procedure and states that: “By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection and the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period of time while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay.” Article 47 deals with the EU Declaration of Conformity, and Article 48 covers CE Marking. Finally in this Section, Article 49 concerns an obligation on the provider of a high-risk system to register in the EU database set out in Article 71. 

Article 50 concerns transparency obligations for providers and users of certain AI Systems and GPAI Models. It states:

“Providers shall ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the concerned natural persons are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.”

The Article also points to the obligation on providers of systems that generate synthetic audio, image, video or text content to ensure that “the outputs of the AI system are marked in a machine readable format.” This is a transparency requirement. See chapter 4 on the definition of this term in the United States of America[140] and see further below for the discussion of Deep Fakes.
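One simplified way of attaching a machine-readable marking to synthetic output, in the spirit of Article 50, is sketched below. In practice providers may rely on watermarking or provenance standards such as C2PA; the metadata fields shown are assumptions for illustration only.

```python
# Illustrative sketch only: one way to attach a machine-readable marking to synthetic text
# output, in the spirit of Article 50. In practice providers may rely on watermarking or
# provenance standards such as C2PA; the field names below are assumptions.
import json
import hashlib
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, model_name: str) -> dict:
    """Wrap generated content with a machine-readable provenance record."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        },
    }

if __name__ == "__main__":
    marked = mark_as_ai_generated("A short synthetic news summary...", "example-llm-1")
    print(json.dumps(marked, indent=2))
```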

Article 51 deals with the classification of General-Purpose AI Models as General-Purpose AI Models with Systemic Risk; these are mentioned elsewhere. The procedure around classification of these models is treated in Article 52. Section 2 of Chapter V sets down obligations for Providers of General Purpose AI models (Article 53) and obligations for providers of General Purpose AI models presenting Systemic Risk (Article 55). Codes of Practice are dealt with in Article 56, which places an obligation on the AI Office to encourage and facilitate the drawing up of such codes at Union level. The role and function of the AI Office is dealt with below under Governance. 

On 10 July 2025 a code of practice for providers of large language models (“General Purpose AI”) was published by the European Commission.[1] It is a voluntary tool to help relevant stakeholders comply with the various provisions of the EU AI Act. It was prepared by independent experts. Following its publication, the Code of Practice went through a phase of review to assess its adequacy. The Code was set to be complemented by Commission guidelines on key concepts.

The Code contains three chapters: transparency, copyright, and safety and security. As regards transparency, the Code offers a method for providers to comply with Article 53 of the EU AI Act. The chapter on safety and security is relevant only to those providers that fall within the ambit of Article 55 of the EU AI Act, namely providers of General Purpose AI models that present systemic risk. The chapter on copyright seeks the adoption of technical measures to prevent outputs which reproduce copyrighted content without permission.


[1] https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai

Chapter VI contains provisions on measures in support of innovation. It deals with Regulatory Sandboxes, the definition of which has already been set out, and permits testing under controlled and supervised conditions. Article 57 deals with AI Regulatory Sandboxes and states:

“Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational 24 months after entry into force. This sandbox may also be established jointly with one or several other Member States’ competent authorities. The Commission may provide technical support, advice and tools for the establishment and operation of AI regulatory sandboxes. The obligation established in previous paragraph can also be fulfilled by participation in an existing sandbox insofar as this participation provides equivalent level of national coverage for the participating Member States.” (…)

Article 58 looks at more detailed arrangements for the Regulatory Sandbox. Further Processing of Personal Data for Developing Certain AI Systems in the Public Interest in the AI Regulatory Sandbox is addressed in Article 59. Article 60 concerns Testing of High-Risk AI Systems in Real World Conditions Outside AI Regulatory Sandboxes and states that providers must satisfy, inter alia, the following:

(a) the provider or prospective provider has drawn up a real world testing plan and submitted it to the market surveillance authority in the Member State(s) where the testing in real world conditions is to be conducted;

(b) the market surveillance authority in the Member State(s) where the testing in real world conditions is to be conducted has approved the testing in real world conditions and the real world testing plan. Where the market surveillance authority in that Member State has not provided with an answer in 30 days, the testing in real world conditions and the real world testing plan shall be understood as approved. In cases where national law does not foresee a tacit approval, the testing in real world conditions shall be subject to an authorisation;

(c) the provider or prospective provider with the exception of high-risk AI systems referred to in Annex III, points 1, 6 and 7 in the areas of law enforcement, migration, asylum and border control management, and high risk AI systems referred to in Annex III point 2, has registered the testing in real world conditions in the non public part of the EU database referred to in Article 71(3) with a Union-wide unique single identification number and the information specified in Annex IX;

(d) the provider or prospective provider conducting the testing in real world conditions is established in the Union or it has appointed a legal representative who is established in the Union;

(e) Data collected and processed for the purpose of the testing in real world conditions shall only be transferred to third countries outside the Union provided appropriate and applicable safeguards under Union law are implemented;

(f) the testing in real world conditions does not last longer than necessary to achieve its objectives and in any case not longer than 6 months, which may be extended for an additional amount of 6 months, subject to prior notification by the provider to the market surveillance authority, accompanied by an explanation on the need for such time extension;

(g) persons belonging to vulnerable groups due to their age, physical or mental disability are appropriately protected;

(h) where a provider or prospective provider organises the testing in real world conditions in cooperation with one or more prospective deployers, the latter have been informed of all aspects of the testing that are relevant to their decision to participate, and given the relevant instructions on how to use the AI system referred to in Article 13; the provider or prospective provider and the deployer(s) shall conclude an agreement specifying their roles and responsibilities with a view to ensuring compliance with the provisions for testing in real world conditions under this Regulation and other applicable Union and Member States legislation;

(i) the subjects of the testing in real world conditions have given informed consent in accordance with Article 61, or in the case of law enforcement, where the seeking of informed consent would prevent the AI system from being tested, the testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the subject and his or her personal data shall be deleted after the test is performed;

(j) the testing in real world conditions is effectively overseen by the provider or prospective provider and deployer(s) with persons who are suitably qualified in the relevant field and have the necessary capacity, training and authority to perform their tasks;

(k) the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded.

5. Any subject of the testing in real world conditions, or his or her legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking his or her informed consent and request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out.

Article 61 requires informed consent by a subject prior to the testing in real world conditions referred to above. Article 62 looks at providers that are SMEs, including start-ups.
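The informed consent required by Article 61, together with the right of withdrawal and deletion of personal data contemplated by Article 60, might be operationalised along the lines of the following sketch. The register structure, identifiers and method names are assumptions of the author; neither Article prescribes a particular mechanism.

```python
# Illustrative sketch only: recording informed consent for real-world testing (Article 61)
# and honouring a withdrawal by deleting the subject's personal data, as contemplated by
# Article 60. Storage and identifiers are assumptions, not a prescribed mechanism.
from datetime import datetime, timezone

class TestingConsentRegister:
    def __init__(self):
        self._consents = {}       # subject_id -> consent record
        self._personal_data = {}  # subject_id -> data collected during testing

    def record_consent(self, subject_id: str, information_provided: str):
        self._consents[subject_id] = {
            "informed_consent": True,
            "information_provided": information_provided,
            "given_at": datetime.now(timezone.utc).isoformat(),
        }

    def store_data(self, subject_id: str, data: dict):
        if not self._consents.get(subject_id, {}).get("informed_consent"):
            raise PermissionError("No informed consent on record for this subject.")
        self._personal_data.setdefault(subject_id, []).append(data)

    def withdraw(self, subject_id: str):
        """Withdrawal requires no justification and triggers deletion of personal data."""
        self._consents.pop(subject_id, None)
        self._personal_data.pop(subject_id, None)

if __name__ == "__main__":
    register = TestingConsentRegister()
    register.record_consent("subject-7", "purpose, duration and rights explained in plain language")
    register.store_data("subject-7", {"observation": "system output reviewed by tester"})
    register.withdraw("subject-7")
```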

Recital 139 states:

“The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation, to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, to facilitate regulatory learning for authorities and companies, including with a view to future adaptions of the legal framework, to support cooperation and the sharing of best practices with the authorities involved in the AI regulatory sandbox, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs), including start-ups. Regulatory sandboxes should be widely available throughout the Union, and particular attention should be given to their accessibility for SMEs, including start-ups. The participation in the AI regulatory sandbox should focus on issues that raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union and contribute to evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox should therefore cover their development, training, testing and validation before the systems are placed on the market or put into service, as well as the notion and occurrence of substantial modification that may require a new conformity assessment procedure. Any significant risks identified during the development and testing of such AI systems should result in adequate mitigation and, failing that, in the suspension of the development and testing process. Where appropriate, national competent authorities establishing AI regulatory sandboxes should cooperate with other relevant authorities, including those supervising the protection of fundamental rights, and could allow for the involvement of other actors within the AI ecosystem such as national or European standardisation organisations, notified bodies, testing and experimentation facilities, research and experimentation labs, European Digital innovation hubs and relevant stakeholder and civil society organisations. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. AI regulatory sandboxes established under this Regulation should be without prejudice to other legislation allowing for the establishment of other sandboxes aiming at ensuring compliance with legislation other than this Regulation. Where appropriate, relevant competent authorities in charge of those other regulatory sandboxes should consider the benefits of using those sandboxes also for the purpose of ensuring compliance of AI systems with this Regulation. Upon agreement between the national competent authorities and the participants in the AI regulatory sandbox, testing in real world conditions may also be operated and supervised in the framework of the AI regulatory sandbox.”

Chapter VII is concerned with governance and refers to: the European Artificial Intelligence Board (Section 1), a Scientific Panel of Independent Experts (Section 1; Article 68) and National Competent Authorities (Section 2).

Governance

Governance and enforcement are an important aspect of the Regulation: as mentioned elsewhere, criticism has been levelled at the relevant Executive Order in the United States of America on account of its absence of an effective enforcement mechanism. In Europe, the EU AI Act deals with governance principally in Chapter VII. Among its provisions there are references to the AI Office (Article 64), the AI Board (Article 65), the Advisory Forum (Article 67), the Scientific Panel of Independent Experts (Article 68), Member State National Competent Authorities and the European Data Protection Supervisor (Article 70). Each of these elements is referred to in turn below. 

Article 64 sets down the Governance impetus at Union level stating:

1. The Commission shall develop Union expertise and capabilities in the field of artificial intelligence. For this purpose, the Commission has established the European AI Office by Decision […].

2. Member States shall facilitate the tasks entrusted to the AI Office, as reflected in this Regulation.

Recital 148 sets down the establishment of the AI Office: 

“This Regulation should establish a governance framework that both allows to coordinate and support the application of this Regulation at national level, as well as build capabilities at Union level and integrate stakeholders in the field of artificial intelligence. The effective implementation and enforcement of this Regulation require a governance framework that allows to coordinate and build up central expertise at Union level. The Commission has established the AI Office by Commission decision of […], which has as its mission to develop Union expertise and capabilities in the field of artificial intelligence and to contribute to the implementation of Union legislation on artificial intelligence. Member States should facilitate the tasks of the AI Office with a view to support the development of Union expertise and capabilities at Union level and to strengthen the functioning of the digital single market. Furthermore, a European Artificial Intelligence Board composed of representatives of the Member States, a scientific panel to integrate the scientific community and an advisory forum to contribute stakeholder input to the implementation of this Regulation, both at national and Union level, should be established. The development of Union expertise and capabilities should also include making use of existing resources and expertise, notably through synergies with structures built up in the context of the Union level enforcement of other legislation and synergies with related initiatives at Union level, such as the EuroHPC Joint Undertaking and the AI Testing and Experimentation Facilities under the Digital Europe Programme.”

Article 56 states that the AI Office shall “encourage and facilitate the drawing up of codes of practice at Union level as an element to contribute to the proper application of this Regulation, taking into account international approaches.” That article also states that the AI Office “may invite the providers of general-purpose AI models, as well as relevant national competent authorities, to participate in the drawing up of codes of practice. Civil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts, may support the process.” 

The Article states further:

“5. The AI Office may invite all providers of general-purpose AI models to participate in the codes of practice. For providers of general-purpose AI models not presenting systemic risks this participation should be limited to obligations foreseen in paragraph 2 point a) of this Article, unless they declare explicitly their interest to join the full code.

6. The AI Office shall aim to ensure that participants to the codes of practice report regularly to the AI Office on the implementation of the commitments and the measures taken and their outcomes, including as measured against the key performance indicators as appropriate. Key performance indicators and reporting commitments shall take into account differences in size and capacity between different participants.

Recital 164 also envisages that the AI Office should be able to investigate possible infringements:

“The AI Office should be able to take the necessary actions to monitor the effective implementation of and compliance with the obligations for providers of general purpose AI models laid down in this Regulation. The AI Office should be able to investigate possible infringements in accordance with the powers provided for in this Regulation, including by requesting documentation and information, by conducting evaluations, as well as by requesting measures from providers of general purpose AI models. In the conduct of evaluations, in order to make use of independent expertise, the AI Office should be able to involve independent experts to carry out the evaluations on its behalf. Compliance with the obligations should be enforceable, inter alia, through requests to take appropriate measures, including risk mitigation measures in case of identified systemic risks as well as restricting the making available on the market, withdrawing or recalling the model. As a safeguard in case needed beyond the procedural rights provided for in this Regulation, providers of general-purpose AI models should have the procedural rights provided for in Article 18 of Regulation (EU) 2019/1020, which should apply by analogy, without prejudice to more specific procedural rights provided for by this Regulation.”

Article 95 is also relevant in respect of elucidating the role of the AI Office:

“1. The AI Office, and the Member States shall encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems other than high-risk AI systems of some or all of the requirements set out in Title III, Chapter 2 of this Regulation taking into account the available technical solutions and industry best practices allowing for the application of such requirements.

2. The AI Office and the Member States shall facilitate the drawing up of codes of conduct concerning the voluntary application, including by deployers, of specific requirements to all AI systems, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives, including elements such as, but not limited to:

(a) applicable elements foreseen in European ethic guidelines for trustworthy AI;

(b) assessing and minimizing the impact of AI systems on environmental sustainability, including as regards energy-efficient programming and techniques for efficient design, training and use of AI;

(c) promoting AI literacy, in particular of persons dealing with the development, operation and use of AI;

(d) facilitating an inclusive and diverse design of AI systems, including through the establishment of inclusive and diverse development teams and the promotion of stakeholders’ participation in that process;

(e) assessing and preventing the negative impact of AI systems on vulnerable persons or groups of persons, including as regards accessibility for persons with a disability, as well as on gender equality.

3. Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations representing them or by both, including with the involvement of deployers and any interested stakeholders and their representative organisations, including civil society organisations and academia. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.

4. The AI Office, and the Member States shall take into account the specific interests and needs of SMEs, including start-ups, when encouraging and facilitating the drawing up of codes of conduct.

The Act makes a distinction between the AI Office and the AI Board. The latter is established pursuant to Article 65:

“1. A ‘European Artificial Intelligence Board’ (the ‘Board’) is established.

2. The Board shall be composed of one representative per Member State. The European Data Protection Supervisor, shall participate as observer. The AI Office shall also attend the Board’s meetings without taking part in the votes. Other national and Union authorities, bodies or experts may be invited to the meetings by the Board on a case by case basis, where the issues discussed are of relevance for them.

2a. Each representative shall be designated by their Member State for a period of 3 years, renewable once.

2b. Member States shall ensure that their representatives in the Board:

(a) have the relevant competences and powers in their Member State so as to contribute actively to the achievement of the Board’s tasks referred to in Article 58;

(b) are designated as a single contact point vis-à-vis the Board and, where appropriate, taking into account Member States’ needs, as a single contact point for stakeholders;

(c) are empowered to facilitate consistency and coordination between national competent authorities in their Member State as regards the implementation of this Regulation, including through the collection of relevant data and information for the purpose of fulfilling their tasks on the Board.

3. The designated representatives of the Member States shall adopt the Board’s rules of procedure by a two-thirds majority. The rules of procedure shall, in particular, lay down procedures for the selection process, duration of mandate and specifications of the tasks of the Chair, the voting modalities, and the organisation of the Board’s activities and its subgroups.

3a. The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related to market surveillance and notified bodies respectively.

The standing sub-group for market surveillance should act as the Administrative Cooperation Group (ADCO) for this Regulation in the meaning of Article 30 of Regulation (EU) 2019/1020.

The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. Where appropriate, representatives of the advisory forum as referred to in Article 58a may be invited to such sub-groups or to specific meetings of those subgroups in the capacity of observers.

3b. The Board shall be organised and operated so as to safeguard the objectivity and impartiality of its activities.

The Board shall be chaired by one of the representatives of the Member States. The European AI Office shall provide the Secretariat for the Board, convene the meetings upon request of the Chair and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and its rules of procedure.”

The Act envisages the AI Office and the AI Board working together. Article 56 on Codes of Practice, for instance, states:

“2. The AI Office and the AI Board shall aim to ensure that the codes of practice cover, but not necessarily be limited to, the obligations provided for in Articles C and D, including the following issues:

(a) means to ensure that the information referred to in Article C (a) and (b) is kept up to date in the light of market and technological developments, and the adequate level of detail for the summary about the content used for training;

(b) the identification of the type and nature of the systemic risks at Union level, including their sources when appropriate;

(c) the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof. The assessment and management of the systemic risks at Union level shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in the light of the possible ways in which such risks may emerge and materialize along the AI value chain. (…)

4. The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific objectives and contain commitments or measures, including key performance indicators as appropriate, to ensure the achievement of those objectives and take due account of the needs and interests of all interested parties, including affected persons, at Union level.”

Compliance obligations with respect to regulatory sandboxes, already mentioned, also apply in respect of both the AI Office and the AI Board. Article 57 states inter alia:

“(…) 5b. National competent authorities shall submit to the AI Office and to the Board, annual reports, starting one year after the establishment of the AI regulatory sandbox and then every year until its termination and a final report. Those reports shall provide information on the progress and results of the implementation of those sandboxes, including best practices, incidents, lessons learnt and recommendations on their setup and, where relevant, on the application and possible revision of this Regulation, including its delegated and implementing acts, and other Union law supervised within the sandbox. Those annual reports or abstracts thereof shall be made available to the public, online. The Commission shall, where appropriate, take the annual reports into account when exercising their tasks under this Regulation.”

The Commission is also given a power to evaluate and review the functioning of the AI Office two years after the date of entry into application of the Regulation – pursuant to Article 112:

“(bb) By … [two years after the date of entry into application of this Regulation referred to in Article 85(2)] the Commission shall evaluate the functioning of the AI office, whether the office has been given sufficient powers and competences to fulfil its tasks and whether it would be relevant and needed for the proper implementation and enforcement of this Regulation to upgrade the Office and its enforcement competences and to increase its resources. The Commission shall submit this evaluation report to the European Parliament and to the Council.”

Chapter VII on Governance deals with other matters: Article 66 sets out the tasks of the AI Board, including: (a) contributing to the coordination among national competent authorities; (b) collecting and sharing technical and regulatory expertise and best practices among Member States; (c) providing advice on the implementation of the Regulation; (d) contributing to the harmonisation of administrative practices in the Member States; and (e) upon the request of the Commission or on its own initiative, issuing recommendations and written opinions on any relevant matters related to the implementation of the Regulation.

The Regulation also commits to the establishment of an Advisory Forum (Article 67) whose task shall be to “advise and provide technical expertise to the Board and the Commission”, with membership made up of a “balanced selection of stakeholders”. The Fundamental Rights Agency, the European Union Agency for Cybersecurity, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI) shall all be permanent members of the forum.

The Regulation commits to establishing a Scientific Panel of Independent Experts (Article 68) by way of a Commission implementing act, which shall consist of “experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of artificial intelligence”. The Scientific Panel shall advise and support the AI Office. The Regulation also envisages Member-State access to the Scientific Panel for the purposes of supporting Member-State enforcement of the provisions of the Regulation (Article 69). Article 90 provides for alerts of systemic risk by the Scientific Panel.

Member State National Competent Authorities are addressed in Section 2 of Chapter VII. Each Member State shall establish or designate at least one notifying authority and at least one market surveillance authority for the purpose of this Regulation as national competent authorities (Article 70). These national competent authorities shall exercise their powers “to safeguard the principles of objectivity of their activities and tasks and to ensure the application and implementation of this Regulation.” These authorities are themselves supervised, where required, by the European Data Protection Supervisor.[141]

The Regulation also sets down penalties for failure to comply with the Regulation (Article 99), under which Member States shall lay down the rules on penalties (and other enforcement measures), which shall be “effective, proportionate, and dissuasive”. Specific values for fines levied for non-compliance with particular provisions have already been set out earlier in this chapter. Included among the provisions in Article 99 is a fine for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities, where the fine can be as much as €7.5m or up to 1% of the operator’s total worldwide annual turnover.

Article 101 is specific in that it refers to a fine that can be imposed by the Commission on providers of general purpose AI models (Article 99 simply refers to fines imposed by Member States), and the value can be as high as €15m or 3% of the provider’s total worldwide turnover. This penalty can be considered where the Commission finds that a provider intentionally or negligently infringes the Regulation, fails to comply with a request for documents or information, fails to comply with a requested measure, or fails to make available to the Commission access to the general purpose AI model.
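
By way of illustration only, the short sketch below works through how these ceilings interact with a given turnover. It assumes, as the Act’s penalty provisions generally provide, that the applicable ceiling is the higher of the fixed amount and the percentage of total worldwide annual turnover; the turnover figure and the names used are purely hypothetical.

# Minimal sketch of the fine ceilings discussed above (Articles 99 and 101).
# Assumes the applicable ceiling is the higher of the fixed amount and the
# percentage of total worldwide annual turnover; all names and the example
# turnover are illustrative only.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum fine ceiling for a given worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct)

# Article 99: incorrect, incomplete or misleading information supplied to
# notified bodies and national competent authorities - up to EUR 7.5m or 1%.
misleading_info_cap = fine_ceiling(turnover_eur=2_000_000_000, fixed_cap_eur=7_500_000, turnover_pct=0.01)

# Article 101: Commission fines on providers of general purpose AI models -
# up to EUR 15m or 3% of total worldwide turnover.
gpai_cap = fine_ceiling(turnover_eur=2_000_000_000, fixed_cap_eur=15_000_000, turnover_pct=0.03)

print(f"Art. 99 ceiling:  EUR {misleading_info_cap:,.0f}")   # EUR 20,000,000
print(f"Art. 101 ceiling: EUR {gpai_cap:,.0f}")              # EUR 60,000,000

For an operator with €2bn in worldwide annual turnover, the turnover-based figure therefore exceeds the fixed cap in both cases.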

Article 83 concerns formal non-compliance and addresses non-compliance in respect of CE marking, failure to register in the EU database, or cases where the technical documentation has not been made available. Article 84 concerns the initiation by the Commission of a Union testing support structure, which can provide information to the AI Board, the Commission or market surveillance authorities.

Article 88 vests power in the Commission to handle enforcement in respect of General Purpose AI models; the Commission shall entrust the implementation of those tasks to the European AI Office. Article 89, which should be read in conjunction with Article 88, concerns the power of the AI Office to monitor compliance with the Regulation by providers of General Purpose AI models. This means providers of AI models which fall into this category will deal directly with the Commission/AI Office in respect of their obligations under the Regulation. A power to request documentation or information on general purpose AI models is given to the Commission pursuant to Article 91, and a power to conduct evaluations of such models is given in Article 92, principally to assess compliance with the Regulation and to investigate systemic risk. The Commission is also given a residual power to request providers to take appropriate measures to comply with their obligations as providers of a General Purpose AI model, which may include “mitigation measures”. The Article states that: “before a measure is requested, the AI Office may initiate a structured dialogue with the provider of the general purpose AI model.”

Delegation of power under the Act is provided for in Chapter XI. Article 97 of that Chapter confers on the Commission the power to adopt delegated acts pursuant to various Articles of the Act for a period of five years. The Commission is to draw up a report in respect of the delegation of power not later than nine months before the end of the five-year period. The delegation is extended for further five-year periods unless the European Parliament or the Council opposes such an extension not later than three months before the end of each five-year period. The Parliament and the Council enjoy other powers under the Article, including a power of revocation (Article 97(3)). Article 98 envisages the Commission fulfilling its obligations with the assistance of a committee.

Codes of Conduct and Guidelines

As mentioned elsewhere, Article 95 of the Regulation states that the AI Office and the Member States shall facilitate the drawing up of codes of conduct, which shall focus on the desirability of releasing trustworthy AI to the market, the assessment of environmental sustainability, the promotion of AI literacy and the facilitation of inclusive and diverse design of AI systems. The Article states inter alia:

The AI Office and the Member States shall facilitate the drawing up of codes of conduct concerning the voluntary application, including by deployers, of specific requirements to all AI systems, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives, including elements such as, but not limited to:

(a) applicable elements foreseen in European ethic guidelines for trustworthy AI;

(b) assessing and minimizing the impact of AI systems on environmental sustainability, including as regards energy-efficient programming and techniques for efficient design, training and use of AI;

(c) promoting AI literacy, in particular of persons dealing with the development, operation and use of AI;

(d) facilitating an inclusive and diverse design of AI systems, including through the establishment of inclusive and diverse development teams and the promotion of stakeholders’ participation in that process;

(e) assessing and preventing the negative impact of AI systems on vulnerable persons or groups of persons, including as regards accessibility for persons with a disability, as well as on gender equality.

Article 96 places a responsibility on the Commission to “develop guidelines on the practical implementation of this Regulation” and “upon request of the Member States or the AI Office, or on its own initiative, the Commission shall update already adopted guidelines when deemed necessary.”

Post-Market Monitoring, Information Sharing, Market Surveillance

Chapter IX contains a number of provisions on post-market monitoring, information sharing and market surveillance. Article 72 deals with Post-Market Monitoring by Providers and the Post-Market Monitoring Plan for High-Risk AI Systems. It states:

1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.

2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law enforcement authorities.

3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by six months before the entry into application of this Regulation.

4. For high-risk AI systems covered by the legal acts referred to in Annex I, Section A, where a post-market monitoring system and plan is already established under that legislation, in order to ensure consistency, avoid duplications and minimise additional burdens, providers shall have a choice to integrate, as appropriate, the necessary elements described in paragraphs 1, 2 and 3 using the template referred to in paragraph 3 into the already existing system and plan under the Union harmonisation legislation listed in Annex I, Section A, provided it achieves an equivalent level of protection. The first subparagraph shall also apply to high-risk AI systems referred to in point 5 of Annex III placed on the market or put into service by financial institutions that are subject to requirements regarding their internal governance, arrangements or processes under Union financial services legislation.

Reporting of serious incidents is treated in Article 73, which sets out tiered reporting deadlines (a short sketch of these deadlines follows the quoted text below). The Article states:

1. Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred.

1a. As a general rule, the period for the reporting referred to in paragraph 1 shall take account of the severity of the serious incident.

1b. The notification referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident.

1c. Notwithstanding paragraph 1b, in the event of a widespread infringement or a serious incident as defined in Article 3(44) point (b) the report referred to in paragraph 1 shall be provided immediately, and not later than 2 days after the provider or, where applicable, the deployer becomes aware of that incident.

1d. Notwithstanding paragraph 1b, in the event of death of a person the report shall be provided immediately after the provider or the deployer has established or as soon as it suspects a causal relationship between the high-risk AI system and the serious incident but not later than 10 days after the date on which the provider or, where applicable, the deployer becomes aware of the serious incident.

1e. Where necessary to ensure timely reporting, the provider or, where applicable, the deployer, may submit an initial report that is incomplete followed up by a complete report.

1a. Following the reporting of a serious incident pursuant to the first subparagraph, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident and corrective action. The provider shall co-operate with the competent authorities and where relevant with the notified body concerned during the investigations referred to in the first subparagraph and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action.

2. Upon receiving a notification related to a serious incident referred to in Article 3(44)(c), the relevant market surveillance authority shall inform the national public authorities or bodies referred to in Article 77(3). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1. That guidance shall be issued 12 months after the entry into force of this Regulation, at the latest, and shall be assessed regularly.

2a. The market surveillance authority shall take appropriate measures, as provided in Article 19 of the Regulation 2019/1020, within 7 days from the date it received the notification referred to in paragraph 1 and follow the notification procedures as provided in the Regulation 2019/1020.

3. For high-risk AI systems referred to in Annex III that are placed on the market or put into service by providers that are subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this Regulation, the notification of serious incidents shall be limited to those referred to in Article 3(44)(c).

3a. For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746 the notification of serious incidents shall be limited to those referred to in Article 3(44)(c) and be made to the national competent authority chosen for this purpose by the Member States where that incident occurred.

3a. National competent authorities shall immediately notify the Commission of any serious incident, whether or not it has taken action on it, in accordance with Article 20 of Regulation 2019/1020.
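
The tiered deadlines in Article 73, quoted above, can be summarised in a short sketch. This is a simplification for illustration only: the Article requires immediate reporting once a causal link (or its reasonable likelihood) is established, the figures below are merely the outer limits counted from the point of awareness, and the category names used are descriptive rather than terms defined in the Act.

# A minimal sketch of the tiered outer reporting deadlines in Article 73 as
# quoted above. Reporting must be immediate once causality is established; the
# day counts below are only the outer limits from awareness of the incident.
# Category names are illustrative, not defined terms of the Act.

from enum import Enum, auto

class IncidentCategory(Enum):
    GENERAL = auto()                   # Article 73(1b): not later than 15 days
    WIDESPREAD_OR_ART_3_44_B = auto()  # Article 73(1c): not later than 2 days
    DEATH = auto()                     # Article 73(1d): not later than 10 days

OUTER_DEADLINE_DAYS = {
    IncidentCategory.GENERAL: 15,
    IncidentCategory.WIDESPREAD_OR_ART_3_44_B: 2,
    IncidentCategory.DEATH: 10,
}

def outer_reporting_deadline(category: IncidentCategory) -> int:
    """Return the maximum number of days, from awareness, within which to report."""
    return OUTER_DEADLINE_DAYS[category]

print(outer_reporting_deadline(IncidentCategory.DEATH))  # 10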

Article 74 deals with Market Surveillance and Control of AI Systems in the Union Market. Article 75 concerns Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems; it provides, inter alia, that where an AI system is based on a general purpose AI model and the model and the system are developed by the same provider, the AI Office shall have powers to monitor and supervise compliance of that AI system with the obligations of the Regulation. The Article states in full:

1. Where an AI system is based on a general purpose AI model and the model and the system are developed by the same provider, the AI office shall have powers to monitor and supervise compliance of this AI system with the obligations of this Regulation. To carry out monitoring and supervision tasks the AI Office shall have all the powers of a market surveillance authority within the meaning of the Regulation 2019/1020.

2. Where the relevant market surveillance authorities have sufficient reasons to consider that general purpose AI systems that can be used directly by deployers for at least one purpose that is classified as high-risk pursuant to this Regulation, is non-compliant with the requirements laid down in this Regulation, it shall cooperate with the AI Office to carry out evaluation of compliance and inform the Board and other market surveillance authorities accordingly.

3. When a national market surveillance authority is unable to conclude its investigation on the high-risk AI system because of its inability to access certain information related to the AI model despite having made all appropriate efforts to obtain that information, it may submit a reasoned request to the AI Office where access to this information can be enforced. In this case the AI Office shall supply to the applicant authority without delay, and in any event within 30 days, any information that the AI Office considers to be relevant in order to establish whether a high-risk AI system is non-compliant. National market authorities shall safeguard the confidentiality of the information they obtain in accordance with Article 78. The procedure provided in Chapter VI of Regulation (EU) 2019/1020 shall apply by analogy.

Article 76 concerns Supervision of Testing in Real World Conditions by Market Surveillance Authorities and states that market surveillance authorities shall have the competence and powers to ensure that testing in real world conditions is in accordance with the Regulation.

The important provision on protecting Fundamental Rights is also contained in Chapter IX. Article 77 provides specific powers to national public authorities or bodies which supervise or enforce the protection of Fundamental Rights to request and access documentation created or maintained under the Regulation with a view to ensuring the continued effective protection of fundamental rights. Such documentation should be “in accessible language”.

Article 78 concerns confidentiality: “The Commission, market surveillance authorities and notified bodies and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union or national law, respect the confidentiality of information and data obtained in carrying out their tasks and activities.” This is said to apply in particular with respect to the following:

(a) intellectual property rights, and confidential business information or trade secrets of a natural or legal person, including source code, except the cases referred to in Article 5 of Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure apply;

(b) the effective implementation of this Regulation, in particular for the purpose of inspections, investigations or audits;

(ba) public and national security interests;

(c) integrity of criminal or administrative proceedings;

(da) the integrity of information classified in accordance with Union or national law;

Article 79 deals with the Procedure for Dealing with AI Systems Presenting a Risk at National Level and Article 80 deals with the Procedure for Dealing with AI Systems Classified by the Provider as Not High-Risk in Application of Annex III. This Article permits the market surveillance authority to carry out an evaluation exercise in an appropriate case. This would be necessary where the activity of the Provider in question falls prima facie within Annex III of the Regulation but where the Provider deems the system to be not high-risk. The Article states:

1. Where a market surveillance authority has sufficient reasons to consider that an AI system classified by the provider as non high-risk in application of Annex III is high-risk, the market surveillance authority shall carry out an evaluation of the AI system concerned in respect of its classification as a high-risk AI system based on the conditions set out in Annex III and the Commission guidelines.

2. Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation as well as take appropriate corrective action within a period it may prescribe. (…)

Article 81 provides for a Union safeguard procedure where objections can be raised by one market surveillance authority against a measure taken by another such authority, or where the Commission considers a measure taken to be contrary to Union law. The Article refers in particular to non-compliance with the prohibited practices detailed in Article 5. The Article states:

Where, within three months of receipt of the notification referred to in Article 79(5), or 30 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, objections are raised by the market surveillance authority of a Member State against a measure taken by another market surveillance authority, or where the Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into consultation with the market surveillance authority of the relevant Member State and operator or operators and shall evaluate the national measure. (…)

Article 82 refers back to Article 79 and deals with the procedure where, following an evaluation carried out by a market surveillance authority, an AI system is found to present “a risk to the health or safety of persons, fundamental rights, or to other aspects of public interest protection”. In such a case the Article empowers the market surveillance authority to require the Provider of the relevant system to “take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe”.

A right to lodge a complaint with a market surveillance authority is vested in any natural or legal person pursuant to Article 85:

1. Without prejudice to other administrative or judicial remedies, complaints to the relevant market surveillance authority may be submitted by any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation.

2. In accordance with Regulation (EU) 2019/1020, complaints shall be taken into account for the purpose of conducting the market surveillance activities and be handled in line with the dedicated procedures established therefore by the market surveillance authorities.

A right to explanation of individual decision making is contained in Article 86:

1. Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2, and which produces legal effects or similarly significantly affects him or her in a way that they consider to adversely impact their health, safety and fundamental rights shall have the right to request from the deployer clear and meaningful explanations on the role of the AI system in the decision-making procedure and the main elements of the decision taken.

2. Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation under paragraph 1 follow from Union or national law in compliance with Union law.

3. This Article shall only apply to the extent that the right referred to in paragraph 1 is not already provided for under Union legislation.

Article 87 mentions Reporting of Breaches and Protection of Reporting Persons. 

Biometrics

Hacker usefully discusses the development of the issue of biometrics.[142] He says the subject of live remote biometric identification, such as facial recognition technology in public spaces, sits at the intersection between unacceptable and high risk under the Act.[143] It has proven to be “contentious” and almost derailed a hard-fought compromise during deliberations in the European Parliament, giving rise to “intense debates and lobbying efforts.” There were two approaches: on the one hand, advocates for the complete elimination of real-time remote biometric identification in public spaces, citing civil liberties, data privacy, and the potential for mass surveillance, wanted to see such identification techniques treated as unacceptable under the Act. On the other hand, the opposing camp argued for narrow exceptions to the rule, especially in instances like crime prevention, prosecution and matters of national security.[144] The author states:

“While concerns about the abuse of live remote biometric identification technologies are legitimate, there are scenarios where their use could be both legitimate and beneficial. For example, in cases of missing children or imminent terrorist threats, the technology could prove invaluable for rapid identification and response.”[145]

An outright ban, rendering the practice unacceptable under the rules, would “potentially put a significant number of innocent person’s in harm’s way.”[146] Gikay, in an article,[147] argues for an incremental approach to the use of facial recognition, citing a scene from the BBC science-fiction TV series The Capture as an example of the use of such technology in a world of ubiquitous facial recognition CCTV cameras. In the show a live CCTV feed is tampered with and smoothly switches from an actual event to a fabricated one in a split second. A “deep fake” TV interview with a government official is broadcast as though it were an actual interview. The programme “dramatizes and amplifies the potential dangers of growing surveillance using live facial recognition technology.”[148] Live facial recognition, says the author, “should also be reserved for serious offences, excluding minor crimes and Standard Operating Procedures (SOPs) should clearly set out this principle. Judicial approval of authorisation of deployment should be mandatory for covert use, with the possibility for fast track or post deployment approval in cases of urgency.”[149] The author also calls for law enforcement authorities to adopt a transparency procedure that is aimed at promoting accountability rather than undermining the effectiveness of law enforcement objectives.[150]

In the EU, Recital 125 of the EU AI Act states:

“Given the complexity of high-risk AI systems and the risks that are associated to them, it is important to develop an adequate system of conformity assessment procedure for high risk AI systems involving notified bodies, so called third party conformity assessment. However, given the current experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for biometrics.”[151]

Annex III: High-Risk AI Systems Referred to in Article 6(2)

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:

1. Biometrics, insofar as their use is permitted under relevant Union or national law:

(a) Remote biometric identification systems. This shall not include AI systems intended to be used for biometric verification whose sole purpose is to confirm that a specific natural person is the person he or she claims to be;

(aa) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;

(ab) AI systems intended to be used for emotion recognition. (…)

Article 5: Prohibited Artificial Intelligence Practices            

The following artificial intelligence practices shall be prohibited:

(…)

(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement unless and in as far as such use is strictly necessary for one of the following objectives:

(i)        the targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of human beings as well as search for missing persons;

(ii)       the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; 

(iii)      the localisation or identification of a person suspected of having committed a criminal offence, for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences, referred to in Annex IIa and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. This paragraph is without prejudice to the provisions in Article 9 of the GDPR for the processing of biometric data for purposes other than law enforcement;

The meaning of “Publicly accessible spaces” is set out in Recital 19:

“For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to an undetermined number of natural persons, and irrespective of whether the place in question is privately or publicly owned and irrespective of the activity for which the place may be used, such as commerce (for instance, shops, restaurants, cafés), services (for instance, banks, professional activities, hospitality), sport (for instance, swimming pools, gyms, stadiums), transport (for instance, bus, metro and railway stations, airports, means of transport ), entertainment (for instance, cinemas, theatres, museums, concert and conference halls) leisure or otherwise (for instance, public roads and squares, parks, forests, playgrounds). A place should be classified as publicly accessible also if, regardless of potential capacity or security restrictions, access is subject to certain predetermined conditions, which can be fulfilled by an undetermined number of persons, such as purchase of a ticket or title of transport, prior registration or having a certain age. By contrast, a place should not be considered publicly accessible if access is limited to specific and defined natural persons through either Union or national law directly related to public safety or security or through the clear manifestation of will by the person having the relevant authority on the place. The factual possibility of access alone (e.g. an unlocked door, an open gate in a fence) does not imply that the place is publicly accessible in the presence of indications or circumstances suggesting the contrary (e.g. signs prohibiting or restricting access). Company and factory premises as well as offices and workplaces that are intended to be accessed only by relevant employees and service providers are places that are not publicly accessible. Publicly accessible spaces should not include prisons or border control. Some other areas may be composed of both not publicly accessible and publicly accessible areas, such as the hallway of a private residential building necessary to access a doctor’s office or an airport. Online spaces are not covered either, as they are not physical spaces. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.”

Ballardini, de, van Genderen and Nokelainen, in an article,[152] present an alternative view. They describe Artificial Intelligence as “crucial” to pushing further the emotional aspects of innovations in technology.

“Emotional data collected from facial expressions, speech tone, physiological measurements and other sources provide a wealth of information about a person’s emotional state, but, have to be handled carefully. Although no official definition of ‘emotional data’ currently exists in EU legislation, we propose (…) the following definition of emotional data:

‘Emotional data’ is data representing the emotional, psychological or physical status of natural persons by identifying and processing their (facial) expressions, movement, behaviour, or other physical, physiological or mental characteristics.”[153]

Emotion-generating AI systems are also described by the authors as AI applications that generate data capable of altering the emotional status of natural persons in order to influence their mood and emotional state. The use of AI in this context, say the authors, allows us to analyse and interpret such data “at a scale and speed that was previously unimaginable.” Companies can put this data to use in various aspects of their business, including by improving their customer service output and personalising experiences.

Recital 32 of the EU AI Act is relevant:

“The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is particularly intrusive to the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights . Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, race, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.”

Such an approach, indicative of “seemingly following a preventive law approach”, according to the authors, “does not seem to stimulate innovations, but rather leans towards blocking future – as yet unknown – developments.”[154]

“Overall this might lead to disincentivising innovations in this field, at least in the European area, reducing the competitiveness of Europe in respect to other markets such as the USA.”[155]

While allowing for what the authors indicate are, as yet, unknown future developments, the authors highlight technology already in existence:

“AI-powered systems through various types of remote sensor and intelligent camera technologies are being developed to collect large amounts of emotional and behavioural data, analyse them, understand their meaning and determine what types of reactions the system should produce to trigger certain (desirable) emotional states. An example of this are the products already developed by NViso, which provide enhanced perception and interaction features enabled by AI, including gaze and eye-state tracking, body tracking and activity and gesture recognition for driver and interior monitoring. Another example is provided by Emoshape, a New York-based start-up that launched MetaSoul in 2022 as the first sentient digital entity, allowing people to interact in the Metaverse through avatars that capture and reproduce the emotions of their human creators.”[156]

The authors conclude that:

“Regulations should be oriented to increasing people’s comfort and well-being instead of prohibiting positive services by AI processing of emotional data based on (insufficient) knowledge of negative effects.”[157]

Open Source

The question whether open source AI models are covered by the Regulation is an important one. Open source refers to the free and open sharing of software code, allowing anyone to contribute to upgrading it or resolving bugs. Mistral, a French AI start-up, releases open source large language models,[158] as does Silo AI, a Finnish company,[159] as well as Meta with its Llama 2.[160] So are they covered by the terms of the AI Act? It has been reported that it was France that proposed exempting open-source models from strict regulation under the EU AI Act.[161] One commentator referred to the open source issue as a “conundrum”.[162] He states:

“The decision to release Mixtral 8x7B as open-source, just like Meta’s Llama 2 or the Falcon family, while championing transparency and accessibility, highlights significant public safety concerns. Generally, open-source models present undeniable advantages that are essential in the broader AI landscape. They act as a counterbalance to monopolizing tendencies in the foundation model market, fostering a more diverse, competitive, and accessible AI ecosystem. However, once powerful enough, the risks of open sourcing arguably outweigh the benefits. Unregulated access to such powerful models can lead to malicious abuse, including malware generation and terrorist uses. Importantly, if the model can be downloaded, safety layers can be quite easily – and even inadvertently – removed.”[163]

The final agreed text of the Regulation exempts open source general purpose AI models from some of the requirements under the Act, but it does not exempt such models from their transparency obligations under Title IV, where they arise, or from the provisions of Title II where they raise an unacceptable risk. Nor are they exempt where they constitute a high-risk model.

Article 2 states:

“The obligations laid down in this Regulation shall not apply to AI systems released under free and open source licences unless they are placed on the market or put into service as high-risk AI systems or an AI system that falls under Title II [unacceptable risk] and IV [Transparency].”

Any open source AI model presenting systemic risk falls under the full force of the Regulation. Recital 112 states as follows:

“It is also necessary to clarify a procedure for the classification of a general purpose AI model with systemic risks. A general purpose AI model that meets the applicable threshold for high-impact capabilities should be presumed to be a general purpose AI models with systemic risk. The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general purpose AI model will meet the requirements that lead to the presumption. This is especially relevant in relation to the FLOP threshold because training of general purpose AI models takes considerable planning which includes the upfront allocation of compute resources and, therefore, providers of general purpose AI models are able to know if their model would meet the threshold before the training is completed. In the context of this notification, the provider should be able to demonstrate that because of its specific characteristics, a general purpose AI model exceptionally does not present systemic risks, and that it thus should not be classified as a general purpose AI model with systemic risks. This information is valuable for the AI Office to anticipate the placing on the market of general purpose AI models with systemic risks and the providers can start to engage with the AI Office early on. This is especially important with regard to general-purpose AI models that are planned to be released as open-source, given that, after open-source model release, necessary measures to ensure compliance with the obligations under this Regulation may be more difficult to implement.”

Not all open source scenarios are covered by the exemption, however: where the AI component is monetised, the model cannot avail of the exemption:

“Free and open-source AI components covers the software and data, including models and general purpose AI models, tools, services or processes of an AI system. Free and open-source AI components can be provided through different channels, including their development on open repositories. For the purpose of this Regulation, AI components that are provided against a price or otherwise monetised, including through the provision of technical support or other services, including through a software platform, related to the AI component, or the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software, with the exception of transactions between micro enterprises, should not benefit from the exceptions provided to free and open source AI components. The fact of making AI components available through open repositories should not, in itself, constitute a monetisation.”[164]
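
Read together, Article 2 and the recitals quoted above suggest a rough decision flow for whether an open source general purpose AI model can rely on the exemption. The sketch below is a simplification under those assumptions: the boolean flags are illustrative descriptions rather than defined terms of the Act, and the Title IV point is collapsed into a single flag even though, strictly, it removes only the transparency obligations rather than the exemption as a whole.

# A rough, simplified decision flow for the open source exemption, based on the
# provisions quoted above: the exemption is lost where the system is high-risk,
# falls under Title II (unacceptable risk) or Title IV (transparency), presents
# systemic risk, or where the AI component is monetised. Sketch only; the flags
# below are illustrative and not defined terms of the Act.

def open_source_exemption_available(
    is_high_risk: bool,
    falls_under_title_ii: bool,
    falls_under_title_iv: bool,
    presents_systemic_risk: bool,
    is_monetised: bool,
) -> bool:
    disqualifiers = (
        is_high_risk,
        falls_under_title_ii,
        falls_under_title_iv,
        presents_systemic_risk,
        is_monetised,
    )
    return not any(disqualifiers)

# Example: a freely released model that is neither high-risk, prohibited,
# transparency-triggering, systemically risky nor monetised.
print(open_source_exemption_available(False, False, False, False, False))  # True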

Downstream systems

Recital 101 refers to the role that providers of general-purpose AI models have in the value chain, in that their models may form the basis for a range of downstream systems. The Regulation requires that “proportionate transparency measures” be adopted so as to facilitate downstream providers’ understanding of those models. Recital 101 states:

“Providers of general-purpose AI models have a particular role and responsibility in the AI value chain, as the models they provide may form the basis for a range of downstream systems, often provided by downstream providers that necessitate a good understanding of the models and their capabilities, both to enable the integration of such models into their products, and to fulfil their obligations under this or other regulations. Therefore, proportionate transparency measures should be foreseen, including the drawing up and keeping up to date of documentation, and the provision of information on the general purpose AI model for its usage by the downstream providers. Technical documentation should be prepared and kept up to date by the general purpose AI model provider for the purpose of making it available, upon request, to the AI Office and the national competent authorities. The minimal set of elements contained in such documentations should be outlined, respectively, in Annex (XY) and Annex (XX). The Commission should be enabled to amend the Annexes by delegated acts in the light of the evolving technological developments.”

Recital 102 provides a form of exemption for open source general purpose AI models in circumstances where the type of information referred to in Recital 60e has already been made publicly available. This exemption from the transparency requirement does not apply where the relevant model presents systemic risks. Recital 102 states:

“The providers of general purpose AI models that are released under a free and open source license, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available should be subject to exceptions as regards the transparency-related requirements imposed on general purpose AI models, unless they can be considered to present a systemic risk, in which case the circumstance that the model is transparent and accompanied by an open source license should not be considered a sufficient reason to exclude compliance with the obligations under this Regulation. In any case, given that the release of general purpose AI models under free and open source licence does not necessarily reveal substantial information on the dataset used for the training or fine-tuning of the model and on how thereby the respect of copyright law was ensured, the exception provided for general purpose AI models from compliance with the transparency-related requirements should not concern the obligation to produce a summary about the content used for model training and the obligation to put in place a policy to respect Union copyright law in particular to identify and respect the reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790.”[165]

Generative AI

The Act embraces generative AI models such as Chat GPT, already mentioned,[166] as well as LLaMA, DALLE and Midjourney, discussed earlier within the context of copyright issues,[167] and Stable Diffusion, likewise discussed in chapter 1 within the context of the copyright cases of Getty Images and Li v Liu.[168] It should be pointed out that generative AI models are not identical with foundational models, though the relevant foundational models today are also generative.[169]

The recitals state that: “large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible generation of content (such as in the form of text, audio, images or video) that can readily accommodate a wide range of distinctive tasks.”[170]

Furthermore, it states:

“General purpose models, in particular large generative models, capable of generating text, images, and other content, present unique innovation opportunities but also challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed. The development and training of such models require access to vast amounts of text, images, videos, and other data. Text and data mining techniques may be used extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and related rights. Any use of copyright protected content requires the authorization of the rightholder concerned unless relevant copyright exceptions and limitations apply.”[171]

Generative AI models are covered by the requirements of transparency,[172] the obligation to prevent the model from generating illegal content,[173] and the obligation to publish summaries of the copyright-protected data used for training of the model.[174] Recital 108 states:

“In order to increase transparency on the data that is used in the pre-training and training of general purpose AI models, including text and data protected by copyright law, it is adequate that providers of such models draw up and make publicly available a sufficiently detailed summary of the content used for training the general purpose model. While taking into due account the need to protect trade secrets and confidential business information, this summary should be generally comprehensive in its scope instead of technically detailed to facilitate parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used. It is appropriate for the AI Office to provide a template for the summary, which should be simple, effective, and allow the provider to provide the required summary in narrative form.”

The Act also makes reference to general-purpose AI models presenting systemic risk.[175] It should be remembered that not all generative AI models will constitute a general-purpose AI model presenting systemic risk. The classification of such a model is determined by a threshold of Floating Point Operations (FLOPs), set down initially in Article 52a and now in Article 51: a model is presumed to present systemic risk where the cumulative amount of compute used for its training, measured in FLOPs, is greater than 10^25.[176]

“Therefore, an initial threshold of FLOPs should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be supplemented with benchmarks and indicators for model capability.”[177]

The Commission also retains a power to designate a model as a general-purpose model presenting systemic risk[178] where the model has capabilities or impact equivalent to those captured by the set threshold.[179] The Commission also has the power to amend the FLOPs threshold pursuant to Article 51(3).[180] At the time of writing, one source considered that only one, and possibly two, models passed the threshold.[181] The European Parliament in a release mentioned only GPT 4.[182]
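
A minimal sketch of the presumption mechanism described above follows. It assumes only what the text states: an initial threshold of 10^25 FLOPs of cumulative training compute (Article 51), a presumption of systemic risk where that threshold is met, and the Commission’s power to designate a model in any event; the function and parameter names are illustrative only.

# Minimal sketch of the systemic-risk presumption discussed above: a general
# purpose AI model is presumed to present systemic risk where cumulative
# training compute exceeds 10^25 FLOPs, and the Commission may in any event
# designate a model on the basis of equivalent capabilities or impact.
# Names are illustrative; the threshold itself may be amended (Article 51(3)).

FLOP_THRESHOLD = 1e25  # initial threshold set by the Act

def presumed_systemic_risk(cumulative_training_flops: float,
                           designated_by_commission: bool = False) -> bool:
    """Return True if the model is presumed (or designated) to present systemic risk."""
    return cumulative_training_flops > FLOP_THRESHOLD or designated_by_commission

# Example: a model trained with 3 x 10^25 FLOPs exceeds the threshold.
print(presumed_systemic_risk(3e25))   # True
print(presumed_systemic_risk(8e24))   # False (below threshold, not designated)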

Classification as a general-purpose model presenting systemic risk brings with it all of the obligations applicable to general purpose AI models (see earlier in this chapter) and also additional obligations: identification and mitigation of risk,[183] ensuring an adequate level of cybersecurity protection,[184] model evaluations,[185] including conducting and documenting adversarial testing of models, and continuous assessment and mitigation of systemic risks,[186] including by putting in place risk management policies, implementing post-market monitoring, taking appropriate measures along the entire model’s lifecycle and cooperating with relevant actors across the AI value chain.[187]

There are also obligations around notification of serious incidents: the model provider should, without undue delay, keep track of the incident and report any relevant information and possible corrective measures to the Commission and national competent authorities.[188]

As stated previously in this chapter, the relevant provider of a general purpose AI model presenting systemic risks must notify the Commission within two weeks after those requirements have been met, or after it becomes known that those requirements will be met.[189]

Recital 114 states:

“The providers of general-purpose AI models presenting systemic risks should be subject, in addition to the obligations provided for providers of general-purpose AI models, to obligations aimed at identifying and mitigating those risks and ensuring an adequate level of cybersecurity protection, regardless of whether it is provided as a standalone model or embedded in an AI system or a product. To achieve these objectives, the Regulation should require providers to perform the necessary model evaluations, in particular prior to its first placing on the market, including conducting and documenting adversarial testing of models, also, as appropriate, through internal or independent external testing. In addition, providers of general-purpose AI models with systemic risks should continuously assess and mitigate systemic risks, including for example by putting in place risk management policies, such as accountability and governance processes, implementing post-market monitoring, taking appropriate measures along the entire model’s lifecycle and cooperating with relevant actors across the AI value chain.”

Recital 115 states:

“Providers of general purpose AI models with systemic risks should assess and mitigate possible systemic risks. If, despite efforts to identify and prevent risks related to a general-purpose AI model that may present systemic risks, the development or use of the model causes a serious incident, the general purpose AI model provider should without undue delay keep track of the incident and report any relevant information and possible corrective measures to the Commission and national competent authorities. Furthermore, providers should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to systemic risks associated with malicious use of or attacks should duly consider accidental model leakage, unsanctioned releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft. This protection could be facilitated by securing model weights, algorithms, servers, and datasets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved.”

Recital 111 states:

“It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose AI model with systemic risks. Since systemic risks result from particularly high capabilities, a general-purpose AI model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, or significant impact on the internal market due to its reach. High-impact capabilities in general purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. The full range of capabilities in a model could be better understood after its release on the market or when users interact with the model. According to the state of the art at the time of entry into force of this Regulation, the cumulative amount of compute used for the training of the general-purpose AI model measured in floating point operations (FLOPs) is one of the relevant approximations for model capabilities. The amount of compute used for training cumulates the compute used across the activities and methods that are intended to enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of FLOPs should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be supplemented with benchmarks and indicators for model capability. To inform this, the AI Office should engage with the scientific community, industry, civil society and other experts. Thresholds, as well as tools and benchmarks for the assessment of high-impact capabilities, should be strong predictors of generality, its capabilities and associated systemic risk of general-purpose AI models, and could take into account the way the model will be placed on the market or the number of users it may affect. To complement this system, there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such model has capabilities or impact equivalent to those captured by the set threshold. This decision should be taken on the basis of an overall assessment of the criteria set out in Annex IXc, such as quality or size of the training data set, number of business and end users, its input and output modalities, its degree of autonomy and scalability, or the tools it has access to. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk, the Commission should take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks.”

Article 52 states:

“1. Where a general-purpose AI model meets the requirements referred to in points (a) of Article A(1), the relevant provider shall notify the Commission without delay and in any event within 2 weeks after those requirements are met or it becomes known that these requirements will be met. That notification shall include the information necessary to demonstrate that the relevant requirements have been met. If the Commission becomes aware of a general purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk.”
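
To make the compute-based presumption concrete: Article 51 sets the initial presumption threshold at 10^25 floating point operations, while the comparable trigger in the original US Executive Order, noted in the footnotes to this chapter, sits at 10^26. The short sketch below is a purely illustrative aid to how such a threshold operates in practice; the 6 × parameters × tokens heuristic and the example model size are assumptions introduced here for illustration only, and nothing in the sketch reflects a compliance methodology prescribed by the legislation.

```python
# Illustrative sketch only: a rough check of the Article 51 presumption that a
# general-purpose AI model presents systemic risk once cumulative training compute
# exceeds 1e25 FLOPs. The "6 * parameters * tokens" estimate is a common rule-of-thumb
# approximation (an assumption here, not a method set out in the Act), and a real
# assessment would cumulate compute across pre-training, synthetic data generation
# and fine-tuning.

EU_AI_ACT_THRESHOLD_FLOPS = 1e25      # initial presumption threshold under the EU AI Act
US_EXEC_ORDER_THRESHOLD_FLOPS = 1e26  # comparator cited for the original US Executive Order

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough dense-transformer estimate: roughly 6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(total_training_flops: float) -> bool:
    """True if the compute figure meets or exceeds the EU presumption threshold."""
    return total_training_flops >= EU_AI_ACT_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model: 500 billion parameters trained on 10 trillion tokens.
    flops = estimate_training_flops(parameters=5e11, training_tokens=1e13)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print(f"EU AI Act systemic-risk presumption triggered: {presumed_systemic_risk(flops)}")
    print(f"Above the US Executive Order threshold: {flops >= US_EXEC_ORDER_THRESHOLD_FLOPS}")
```

On these assumed figures the hypothetical model would fall within the EU presumption but below the US threshold, which illustrates in practical terms the difference between the two instruments noted later in this chapter.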

Foundational Models

While not all generative AI models are foundational, all of the current foundational models are generative – this means the provisions on generative AI models discussed above apply to foundational models. Into this category fall particularly potent models such as GPT-4 by OpenAI, Claude by Anthropic, PaLM and Bard by Google, and LLaMA by Meta. To this initial list can now be added Mistral AI’s Mixtral 8x7B and Mistral 7B foundation models. These models have been trained on large amounts of data and form the basis for various downstream applications. According to one source, the Council had formed a view that would “unequivocally designate such models as high-risk applications” – a move which would have turned the application-based architecture of the Regulation on its head.[190] In the legislative process there were various proposals on how to regulate the issue of foundational models. The European Parliament, for example, set down, in one of its positions, specific provisions on “foundation models.”[191] In the result, the final version of the Act sets down provisions in respect of general-purpose AI models (see earlier in this chapter) and indicates that large generative AI models are a typical example of a general-purpose AI model.

Recital 99 states:

“Large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible generation of content (such as in the form of text, audio, images or video) that can readily accommodate a wide range of distinctive tasks.”[192]

Recital 107 states:

“General-purpose models, in particular large generative models, capable of generating text, images, and other content, present unique innovation opportunities but also challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed. The development and training of such models require access to vast amounts of text, images, videos, and other data. Text and data mining techniques may be used extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and related rights. Any use of copyright protected content requires the authorization of the rightholder concerned unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter, for the purposes of text and data mining, under certain conditions. Under these rules, rightholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research. Where the rights to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorisation from rightholders if they want to carry out text and data mining over such works.”[193]

As we have already seen, such general-purpose AI systems may be used as high-risk systems by themselves or may be components of other high-risk systems.

Recital 85 states:

“General purpose AI systems may be used as high-risk AI systems by themselves or be components of other high risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain, the providers of such systems should, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems and unless provided otherwise under this Regulation, closely cooperate with the providers of the respective high-risk AI systems to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation.”

A stricter regime, designed to catch so-called ‘high impact’ foundation models[194] (see the section on foundational models above), provides for particular obligations on any general-purpose AI model presenting systemic risk. By its nature, based on the current technology, any model that falls into this category will also fall into the category of being a foundational model.

AI Regulatory Sandbox

As noted above, the Regulation sets down the concept of a regulatory sandbox.[195] Article 57 states as follows:

“AI regulatory sandboxes established under Article 53(1) of this Regulation shall, in accordance with Articles 53 and 53a, provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific sandbox plan agreed between the prospective providers and the competent authority. Such regulatory sandboxes may include testing in real world conditions supervised in the sandbox.”

Article 57 further states:

“The establishment of AI regulatory sandboxes shall aim to contribute to the following objectives: […]”

One author says of regulatory sandboxes:

“Regulatory sandboxes, viewed as a regulation strategy, are embedded in the broader context of how to address legal problems posed by new technologies. When new technologies emerge, the legislator can rely on administrative authorities, courts, legal scholarship to apply existing law to new socio-technical phenomena.”[196]

Deep Fakes

As mentioned elsewhere,[197] deep fakes constitute limited-risk AI systems under the Regulation. A definition of the concept of deep fakes has already been given:

“deep fake” means AI generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful”.[198]

The EU AI Act creates specific transparency obligations. When deploying AI systems such as chatbots, users must be made aware that they are interacting with a machine.[199] Deep fakes and other AI-generated content will have to be labelled as such,[200] and users need to be informed when biometric categorisation or emotion recognition systems are being used.[201] In addition, providers will have to design systems in such a way that synthetic audio, video, text and image content is marked in a machine-readable format, and detectable as artificially generated or manipulated.[202]
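
As a purely illustrative aid to the machine-readable marking obligation, the sketch below shows one naive way a provider might attach a provenance label to a generated output, here as a JSON record produced alongside the content. The field names, the sidecar approach and the model name are hypothetical assumptions introduced for illustration; the Act does not prescribe this format, and in practice providers would rely on techniques such as watermarking or standardised provenance metadata.

```python
# Purely illustrative sketch: attaching a machine-readable "artificially generated"
# label to an output as a JSON provenance record. The field names and structure are
# assumptions for illustration only, not a format required by the AI Act.
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(content: bytes, model_name: str) -> str:
    """Return a JSON provenance record declaring the content as AI-generated."""
    record = {
        "ai_generated": True,                                    # the disclosure itself
        "generator": model_name,                                 # which system produced the output
        "content_sha256": hashlib.sha256(content).hexdigest(),   # ties the label to this output
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    synthetic_image = b"...bytes of a generated image..."
    print(label_synthetic_content(synthetic_image, model_name="example-image-model"))
```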

Overarching Aim

It’s worth noting the aim in the Regulation set out in Recital 27, which refers to the design of a “coherent, trustworthy and human-centric Artificial Intelligence”. The Recital states:

While the risk-based approach is the basis for a proportionate and effective set of binding rules, it is important to recall the 2019 Ethics Guidelines for Trustworthy AI developed by the independent High-Level Expert Group on AI (HLEG) appointed by the Commission. In those Guidelines the HLEG developed seven non-binding ethical principles for AI which should help ensure that AI is trustworthy and ethically sound. The seven principles include: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being and accountability. Without prejudice to the legally binding requirements of this Regulation and any other applicable Union law, these Guidelines contribute to the design of a coherent, trustworthy and human-centric Artificial Intelligence, in line with the Charter and with the values on which the Union is founded. According to the Guidelines of HLEG, human agency and oversight means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans. Technical robustness and safety means that AI systems are developed and used in a way that allows robustness in case of problems and resilience against attempts to alter the use or performance of the AI system so as to allow unlawful use by third parties, and minimise unintended harm.

Recitals

The Recitals are also a useful source. Aside from those already mentioned, there are provisions on the purpose of the Regulation. Recital 1 states:

The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, placing on the market, putting into service and the use of artificial intelligence systems in the Union in conformity with Union values, to promote the uptake of human centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy and rule of law and environmental protection, against harmful effects of artificial intelligence systems in the Union and to support innovation. This regulation ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of Artificial Intelligence systems (AI systems), unless explicitly authorised by this Regulation.

Recital 2 commits the Regulation to conform to Union values:

This Regulation should be applied in conformity with the values of the Union enshrined in the Charter facilitating the protection of individuals, companies, democracy and rule of law and the environment while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.

Recital 6 mentions the values of the Union:

Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter. As a pre-requisite, artificial intelligence should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.

Recital 7 specifically references the Charter of Fundamental Rights:

In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common rules for all high-risk AI systems should be established. Those rules should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments. They should also take into account the European Declaration on Digital Rights and Principles for the Digital Decade (2023/C 23/01) and the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of the High-Level Expert Group on Artificial Intelligence.

Recital 10 refers to the fundamental right of the safeguarding of personal data:

The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive 2016/680. Directive 2002/58/EC additionally protects private life and the confidentiality of communications, including by way of providing conditions for any personal and non-personal data storing in and access from terminal equipment. Those Union legal acts provide the basis for sustainable and responsible data processing, including where datasets include a mix of personal and non-personal data. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments. It also does not affect the obligations of providers and deployers of AI systems in their role as data controllers or processors stemming from national or Union law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling. Harmonised rules for the placing on the market, the putting into service and the use of AI systems established under this Regulation should facilitate the effective implementation and enable the exercise of the data subjects’ rights and other remedies guaranteed under Union law on the protection of personal data and of other fundamental rights.

Liability of Intermediary Service Providers pursuant to Union law is specifically mentioned in Recital 11:

This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].

Recital 21 makes clear there is no distinction between providers within and outside the Union:

In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union.

Recital 25 excludes certain research efforts:

This Regulation should support innovation, respect freedom of science, and should not undermine research and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development. Moreover, it is necessary to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems or models prior to being placed on the market or put into service. As regards product oriented research, testing and development activity regarding AI systems or models, the provisions of this Regulation should also not apply prior to these systems and models being put into service or placed on the market. This is without prejudice to the obligation to comply with this Regulation when an AI system falling into the scope of this Regulation is placed on the market or put into service as a result of such research and development activity and to the application of provisions on regulatory sandboxes and testing in real world conditions. Furthermore, without prejudice to the foregoing regarding AI systems specifically developed and put into service for the sole purpose of scientific research and development, any other AI system that may be used for the conduct of any research and development activity should remain subject to the provisions of this Regulation. Under all circumstances, any research and development activity should be carried out in accordance with recognised ethical and professional standards for scientific research and should be conducted according to applicable Union law.

Union values are again referred to in Recital 28:

Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.

Provisions prohibiting certain practices under Union law are not affected by the Regulation:

Practices that are prohibited by Union legislation, including data protection law, nondiscrimination law, consumer protection law, and competition law, should not be affected by this Regulation.

Recital 70 again mentions data protection:

The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in Union data protection law, are applicable when personal data are processed. Measures taken by providers to ensure compliance with those principles may include not only anonymisation and encryption, but also the use of technology that permits algorithms to be brought to the data and allows training of AI systems without the transmission between parties or copying of the raw or structured data themselves, without prejudice to the requirements on data governance provided for in this Regulation.

Recital 174 states that a review of the EU AI Act is scheduled within three years:

Given the rapid technological developments and the required technical expertise in the effective application of this Regulation, the Commission should evaluate and review this Regulation by three years after the date of entry into application and every four years thereafter and report to the European Parliament and the Council. In addition, taking into account the implications for the scope of this Regulation, the Commission should carry out an assessment of the need to amend the list in Annex III and the list of prohibited practices once a year. Moreover, by two years after entry into application and every four years thereafter, the Commission should evaluate and report to the European Parliament and to the Council on the need to amend the high-risk areas in Annex III, the AI systems within the scope of the transparency obligations in Chapter IV, the effectiveness of the supervision and governance system and the progress on the development of standardisation deliverables on energy efficient development of general-purpose AI models, including the need for further measures or actions. Finally, within two years after the entry into application and every three years thereafter, the Commission should evaluate the impact and effectiveness of voluntary codes of conducts to foster the application of the requirements set out in Chapter III, Section 2, for systems other than high-risk AI systems and possibly other additional requirements for such AI systems.

Comment

Many will be familiar with The Brussels Effect,[203] in which author Anu Bradford does a tremendous service by highlighting the impact that regulation made in Brussels has had beyond the EU’s borders. She states:

“The idea for this book was born as a reaction to the nearly constant public commentary about the European Union’s demise or global irrelevance that permeates modern popular discourse. That narrative contradicted the data and patterns I observed in my own academic research, which provided many profound examples of the EU’s global regulatory power and influence. Accurately examining these examples affirms the EU’s continuing, even growing, global relevance to the conduct of international regulatory affairs. These conflicting narratives sparked the idea to initially write an article about the mechanisms driving the EU’s decline, and to advance a more informed view of the EU’s role in the world. In that article, published in 2012 in Northwestern University Law Review, I coined the term the “Brussels Effect” – to capture the origins of the EU’s power that stems from its Brussels-based institutions and to pay tribute to, and to build on, David Vogel’s pathbreaking work on the California Effect.”[204]

Bradford in her book gives various examples: market competition, the digital economy, consumer health and safety, and the environment; and to this list can probably already be added the field of Artificial Intelligence (“AI”).

It’s worth noting that both the United States of America[205] and China[206] have indicated in their respective provisions that each wishes to set global standards. So, will the EU approach be the benchmark?[207] Like many things in this space, we will have to wait and see. Certainly, the EU was cognisant of the overarching issue of public safety when it drafted its provisions. This is evident from the late changes that it made – often in the teeth of criticism from industry – to include general purpose artificial intelligence systems in its regulation and to deal with them as robustly as the exigencies of diplomacy would allow. While industry claimed this would negatively impact innovation, the EU carried on regardless. A late drive by France, Italy and Germany to pull back on the proposals[208] – threatening to send the whole project back to the drawing board – was itself dismissed after one of the indigenous companies that had lobbied for intervention was subsequently the subject of significant investment by Microsoft.[209]

Consequently, the EU AI Act came into force in August 2024, with the prohibitions against “unacceptable risk” AI in force from 2 February 2025; a range of provisions in respect of general purpose artificial intelligence systems take effect on 2 August 2025; and provisions on high-risk AI systems, including biometrics, critical infrastructure, education and employment, are in force from 2 August 2026; all in accordance with Article 113. It constitutes the world’s most comprehensive horizontal legal framework on the regulation of Artificial Intelligence systems. The process took six years to complete after initial frameworks and drafts were proposed as early as 2018.[210] While it was initially the first attempt of its kind anywhere in the world, subsequent measures in this space in Brazil and China, and the relevant Executive Order in the United States of America, were all in train before the coming into force of the EU AI Act: both China and the USA crossed the line ahead of the EU. Still, there is no doubt that the EU AI Act, at almost x pages, is clearly the most comprehensive instrument. It is the result of a six-year inter-institutional dialogue that has unearthed virtually every conceivable Artificial Intelligence regulatory issue extant – at the moment.

One source[211] says that, on the basis of the requirement for Lawful AI, its authors have developed the concept of ‘Legal Trustworthiness’, a concept which requires that the regulation of AI: (1) appropriately allocates responsibility for the harms and wrongs resulting from AI systems, especially where these pertain to fundamental rights;[212] (2) establishes and maintains a coherent legal framework accompanied by effective and legitimate enforcement mechanisms to secure and uphold the rule of law;[213] and (3) places democratic deliberation at its centre, which includes the conferral of public participation and other information rights necessary for effective democracy.[214]

The authors cite the European Commission’s High-Level Expert Group on AI publications including Ethics Guidelines for Trustworthy AI[215] wherein the report states:

“Trustworthy AI has three components, which should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavour to align them.”[216]

These guidelines lack legal force and their operative provisions are confined to ethical and socio-technically robust AI – Lawful AI is not addressed. The authors indicate their understanding that the EU AI Act is meant to address this remaining component – that of lawful AI.

The authors state:

“For legality to contribute to ‘Trustworthiness,’ it is crucial that the legal framework itself adequately deals with the risks associated with AI. This desideratum goes far beyond simple legal compliance checks – it requires the existence of a regulatory framework which addresses the foundational values of fundamental rights, the rule of law, and democracy.”[217]

Conclusion

It is undeniable that the EU AI Act is a significant milestone in AI regulation. On its passing there was some criticism from both sides, however. One of the views expressed was that, while the largest foundational models, like GPT-4, are covered by the Act, the Act constitutes “relatively weak regulation”.[218] The issue around Mistral, a French tech start-up that had successfully lobbied the French government, only to subsequently take significant investment from Microsoft, struck some as pointing to a threat of tech monopolies in this space.[219] Certainly the issue is complex. At the time of writing Microsoft, with investments in OpenAI and Mistral, and the roll-out of large-language model technology in its Bing Copilot, has a decisive first-mover advantage in the marketplace. Whether this changes, or whether Microsoft grows to dominate this sector much in the way that Google dominated Web 2.0, is surely a cause of concern for policy-makers.

On the other side of the aisle were the technology companies that had lobbied hard for softer regulation. The Computer & Communications Industry Association was quoted as saying that the Act imposed “stringent obligations” on developers of cutting-edge technologies that underpin many downstream systems and is therefore likely to hinder innovation in Europe. This could lead to an exodus of AI talent, it warned.[220] One source demonstrated that many organisations “lack procedures for technical documentation and do not have someone trained to determine compliance requirements”.[221]

One commentator has pointed to omissions in the Act that will need to be filled later: mandatory basic AI safety standards; the conundrum of open-source models; the environmental impact of AI; and the need to accompany the AI Act with far more substantial public investment in AI.[222] One source even notes that upwards of 70 pieces of secondary legislation will be required to support implementation of the AI Act,[223] and the issue of AI and copyright is subject to active discussions between the EU and Member States.[224] Hacker considers that immediate action is required to create protocols for regulated access to high-performance, potentially open-source AI systems.[225] He also notes that the EU, with its “command-and-control” regulatory style, differs from, say, the United Kingdom, which has leaned more towards a self-regulatory model that emphasises AI safety and existential risk.[226] Another source considers that a sectoral approach advocating incremental regulation would have been preferable and points to the UK as an example of this approach.[227]


[1] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[2] See Chapter 5 on Artificial Intelligence and Liability. 

[3] AI Watch (2020), Estimating Investments in General Purpose Technologies: The case of AI Investments in Europe, European Commission Joint Research Centre at p. 17 and cited in “AI – Here for Good A National Artificial Intelligence Strategy for Ireland” at p. 47 available at https://enterprise.gov.ie/en/publications/publication-files/national-ai-strategy.pdf

[4] “AI – Here for Good A National Artificial Intelligence Strategy for Ireland” available at https://enterprise.gov.ie/en/publications/publication-files/national-ai-strategy.pdf

[5] https://enterprise.gov.ie/en/publications/publication-files/national-ai-strategy.pdf

[6] Ibid at p. 17 et seq.

[7] Ibid at p. 21 et seq. 

[8] Ibid, at p. 21.

[9] Ibid. See The New York Times report about an incorrect facial recognition match for 3 individuals in Detroit. (https://www.nytimes.com/2024/06/29/technology/detroit-facial-recognition-false-arrests)

[10] https://enterprise.gov.ie/en/publications/national-ai-strategy-refresh-2024.html#:~:text=The%20refresh%20of%20Ireland’s%20National,was%20launched%20in%20July%202021.

[11] See chapter 2

[12] See https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

[13] See Armstrong, Smarter than Us, Machine Intelligence Research Institute, available online at https://smarterthan.us

[14] https://intelligence.org/2013/05/15/when-will-ai-be-created/

[15] https://intelligence.org/2013/05/15/when-will-ai-be-created/

[16] https://www.cnbc.com/2022/10/12/us-chip-export-restrictions-could-hobble-chinas-semiconductor-goals.html

[17] “How the US chip export controls have turned the screws on China”, Financial Times,  October 22, 2022, available at (subscription needed) https://www.ft.com/content/bbbdc7dc-0566-4a05-a7b3-27afd82580f3

[18] The Financial Times (Subscription needed) https://www.ft.com/content/fd5c19b7-6b55-4788-92b6-55e04a11d717

[19] https://hai.stanford.edu/news/china-and-united-states-unlikely-partners-ai

[20] https://www.ft.com/content/d3847dbb-9ee1-4868-8734-98c8ab1feb91

[21] https://www.lesswrong.com/posts/wNrbHbhgPJBD2d9v6/language-models-are-a-potentially-safe-path-to-human-level

[22] https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai

[23] https://www.siliconrepublic.com/machines/eu-ai-act-approved-majority-vote-risk

[24] Yudkowsky, “Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures” Machine Intelligence Research Institute, Available at https://intelligence.org/files/CFAI.pdf

[25] See chapter 2

[26] Bostrom, Superintelligence, Oxford University Press, 2014.

[27] See chapter 2.

[28] See chapter 2

[29] https://intelligence.org/files/AIFoomDebate.pdf

[30] 2021/0106(COD) Proposed Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain union legislative Acts https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN

[31] See the Working Paper of the Future of Life Institute, entitled, A Proposal for a Definition of General Purpose Artificial Intelligence Systems, available at  https://futureoflife.org/wp-content/uploads/2022/11/SSRN-id4238951-1.pdf

[32] Hacker, P. (2023). AI Regulation in Europe: From the AI Act to Future Regulatory Challenges. Computing Research Repository, 2023(2310) at p. 1.

[33] See later in this chapter for discussion of the different categories.

[34] This had originally been classified as limited risk/low risk: “The regulation follows a risk-based approach, differentiating between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk”. See recital 5.2.2 of the original text of the AI Act as released by the European Commission 2021/0106(COD) Proposed Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain union legislative Acts https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN Minimal risk is referred to by the European Commission here: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[35] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[36] Emphasis added

[37] See, for example, Art 29(3): “[D]eployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system”.

[38] See recital 32 of  the European Council compromise text, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, text for the Committee of the Permanent Representatives of the Governments of the Member States to the European Union,  issued in November 2022 and available here: https://artificialintelligenceact.eu/wp-content/uploads/2022/11/AIA-CZ-Draft-for-Coreper-3-Nov-22.pdf See also the comments of the European Commission: “The risk classification is based on the intended purpose of the AI system, in line with the existing EU product safety legislation. It means that the classification of the risk depends on the function performed by the AI system and on the specific purpose and modalities for which the system is used”.  https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683

[39] Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., & Floridi, L. (2024). AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act. Digital Society, 3(1) at 1.

[40] Ibid at 2.

[41] Ibid

[42] Ibid at 2.

[43] Art 27

[44] Art 27

[45] Floridi, Luciano and Holweg, Matthias and Taddeo, Mariarosaria and Amaya, Javier and Mökander, Jakob and Wen, Yuni, capAI – A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act (March 23, 2022). Available at SSRN: https://ssrn.com/abstract=4064091 or http://dx.doi.org/10.2139/ssrn.4064091

[46] The following is the text of the European Union, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, text for the Committee of the Permanent Representatives of the Governments of the Member States to the European Union,  issued in November 2022 and available here: https://artificialintelligenceact.eu/wp-content/uploads/2022/11/AIA-CZ-Draft-for-Coreper-3-Nov-22.pdf

[47] Subsection added to final draft

[48] This was added to final draft.

[49] This was added to final draft.

[50] This was added to final draft.

[51] This was added to the final draft

[52] See generally Mazzini, G., & Bagni, F. (2023). Considerations on the regulation of AI systems in the financial sector by the AI Act. Frontiers in Artificial Intelligence, 6,

[53] Ibid

[54] Previously this read: AI systems intended to be used for risk assessment in relation to natural persons and pricing in the case of life and health insurance with the exception of AI systems put into service by providers that are micro and small-sized enterprises as defined in the Annex of Commission Recommendation 2003/361/EC for their own use.

[55] This was amended in the final text. 

[56] See Section 1 of Chapter 3 of the text of the proposed Regulation.  

[57] See recital 5.2.3 High Risk AI Systems.

[58] Low risk was subsequently deleted. 

[59] Future of Life Institute, Working Paper, A Proposal for a Definition of General Purpose Artificial Intelligence Systems available at  https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4238951

[60] See recital 70a of Council of the European Union, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Presidency compromise text 2021 available at https://data.consilium.europa.eu/doc/document/ST-14278-2021-INIT/en/pdf

[61] Council of the European Union, Proposition de Règlement du Parlement européen et du Conseil établissant des règles harmonisées concernant l’intelligence artificielle (législation sur l’intelligence artificielle) et modifiant certains actes législatifs de l’Union- Text de compromis de la présidence – Article 3, paragraphe 1 ter, Articles 4 bis à 4 quater, Annexe VI (3) et (4), considérant 12 bis bis. 2022 as cited by Future of Life Institute, Working Paper, A Proposal for a Definition of General Purpose Artificial Intelligence Systems and available online at: https://eur-lex.europa.eu/legal-content/FR/ALL/?uri=CELEX%3A52021PC0206

[62] See Recital 12c of Council of the European Union, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Fourth Presidency compromise text October 2022 available at https://artificialintelligenceact.eu/wp-content/uploads/2022/10/AIA-CZ-4th-Proposal-19-Oct-22.pdf and this position was reflected in the text for the Committee of the Permanent Representatives of the Governments of the Member States to the European Union,  issued in November 2022 and available here: https://artificialintelligenceact.eu/wp-content/uploads/2022/11/AIA-CZ-Draft-for-Coreper-3-Nov-22.pdf

[63] See the Working Paper of the Future of Life Institute, entitled, A Proposal for a Definition of General Purpose Artificial Intelligence Systems, available at  https://futureoflife.org/wp-content/uploads/2022/11/SSRN-id4238951-1.pdf

[64] Ibid.

[65] See Article 3 (1) (b) Council of the European Union, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, text for the Committee of the Permanent Representatives of the Governments of the Member States to the European Union,  issued November 2022 and available here: https://artificialintelligenceact.eu/wp-content/uploads/2022/11/AIA-CZ-Draft-for-Coreper-3-Nov-22.pdf

[66] Art. 3.

[67] See the Working Paper of the Future of Life Institute, entitled, A Proposal for a Definition of General Purpose Artificial Intelligence Systems, available at  https://futureoflife.org/wp-content/uploads/2022/11/SSRN-id4238951-1.pdf

[68] See the Working Paper of the Future of Life Institute, entitled, A Proposal for a Definition of General Purpose Artificial Intelligence Systems, available at  https://futureoflife.org/wp-content/uploads/2022/11/SSRN-id4238951-1.pdf

[69] Also on the matter of excluding certain systems from the remit of the proposed Regulation, in October 2022 the European Council published a compromise text which provides an exemption for high risk AI systems in the areas of law enforcement, migration, asylum and border control management, and critical infrastructure from the obligation to register in the EU proposed database. See recital 4.9, 14278/21 Council of the European Union’s proposed compromise text available at https://artificialintelligenceact.eu/wp-content/uploads/2022/10/AIA-CZ-4th-Proposal-19-Oct-22.pdf

[70] Recital 70a, 14278/21 Council of the European Union’s proposed presidency compromise text (November 2021) available at  https://data.consilium.europa.eu/doc/document/ST-14278-2021-INIT/en/pdf where a general purpose AI system is defined as… “able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation, etc”.

[71] https://futureoflife.org/wp-content/uploads/2022/10/Civil-society-letter-GPAIS-October-2022.pdf

[72] https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-16

[73] https://www.euractiv.com/section/digital/news/the-us-unofficial-position-on-upcoming-eu-artificial-intelligence-rules/

[74] https://www.euractiv.com/section/digital/news/the-us-unofficial-position-on-upcoming-eu-artificial-intelligence-rules/ Emphasis added.

[75] Ibid. 

[76] See Title 1A Article 4b, Council of the European Union, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, text for the Committee of the Permanent Representatives of the Governments of the Member States to the European Union,  issued November 2022 and available here: https://artificialintelligenceact.eu/wp-content/uploads/2022/11/AIA-CZ-Draft-for-Coreper-3-Nov-22.pdf

[77] Ibid at Title 1A Article 4c.

[78] See https://www.williamfry.com/newsandinsights/news-article/2022/11/03/industry-impacts-council-of-the-eu-publishes-new-compromise-text-for-the-artificial-intelligence-act

[79] Recital 85

[80] See also Article 19

[81] See discussion below in this chapter

[82] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[83] Emphasis added.

[84] Art 3 states that: “‘importer’ means any natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union”;https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683

[85] Art 3 states: “‘distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market”

[86] Art 3 states: “‘operator’ means the provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor”.

[87] Art. 27.

[88] Emphasis added.

[89] https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683

[90] https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683

[91] Kaminski, Margot E., Regulating the Risks of AI (August 19, 2022). Boston University Law Review, Vol. 103:1347, 2023, U of Colorado Law Legal Studies Research Paper No. 22-21, Available at SSRN: https://ssrn.com/abstract=4195066 or http://dx.doi.org/10.2139/ssrn.4195066

[92] Ibid at 1404. 

[93] Ibid at 1405.

[94] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[95] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[96] Art 50.

[97] See also Recital 44.

[98] See later in this chapter for discussion

[99] See Chapter III.

[100] Annex III

[101] Annex III

[102] Recital 50

[103] Annex III

[104] Annex III

[105] Annex III

[106] Annex III

[107] Annex III

[108] Art 9.

[109] Recital 69.

[110] Recital 71.

[111] Article 11.

[112] Art 13(2)

[113] Art 14.

[114] Art. 15

[115] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[116] Recital 15.

[117] Annex III

[118] See https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

[119] Art 99

[120] Art 99

[121] Art 99.

[122] See below in this chapter

[123] See https://digital-strategy.ec.europa.eu/en/policies/ai-office

[124] https://oecd.ai/en/ai-principles

[125] For discussion on subliminal techniques see Franklin, M., Tomei, P., & Gorman, R. (2023). Strengthening the EU AI Act: Defining Key Terms on AI Manipulation. Computing Research Repository, 2023(2308).

[126] Art 6(2a) 

[127]Art 6(2a) 

[128] Art 50.

[129] This is dealt with elsewhere in this chapter

[130] Art 6(2a)

[131] Article 6(1)

[132] Article 6(2)

[133] Article 6(2a)

[134] See this chapter earlier where it looks at Limited Risk

[135] Art. 5.

[136] Art 13(2)

[137] Article 3

[138] Article 3

[139] Article 3

[140] “The term “watermarking” means the act of embedding information, which is typically difficult to remove, into outputs created by AI — including into outputs such as photos, videos, audio clips, or text — for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.” Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence at Art 3.

[141] https://www.edps.europa.eu/_en

[142] Hacker, P. (2023). AI Regulation in Europe: From the AI Act to Future Regulatory Challenges. Computing Research Repository, 2023(2310), due to appear in Ifeoma Ajunwa & Jeremias Adams-Prassl (eds), Oxford Handbook of Algorithmic Governance and the Law, OUP 2024.

[143] See for example the Facial Recognition Technology Bill 2023 and the comment in The Irish Times: “That FRT is largely to be managed through an AI Act tells you much about why FRT should feature highly on the list of the ways in which the EU is tilting alarmingly towards normalising ever greater, more powerful and sneakier methods of mass surveillance.” https://www.irishtimes.com/technology/2024/07/04/eu-is-tilting-alarmingly-towards-normalising-mass-surveillance/

[144] Ibid at 10.

[145] Ibid at 10.

[146] Ibid.

[147] Gikay, Regulating Use by Law Enforcement Authorities of Live Facial Recognition Technology in Public Spaces: An Incremental Approach 2023 Cambridge Law Journal 82(3) p 414 – 449. Accessible at https://www.doi.org/10.1017/S0008197323000454

[148] Ibid at 415

[149] Ibid at 448

[150] Ibid at 448

[151] Emphasis added.

[152] Ballardini, van Genderen and Nokelainen, Legal incentives for innovations in the emotional AI domain: a carrot and stick approach? Journal of Intellectual Property Law & Practice, 2024, available at https://doi.org/10.1093/jiplp/jpae041

[153] Ibid at 2.

[154] Ibid at 2.

[155] Ibid. 

[156] Ibid

[157] Ibid at 10

[158] https://mistral.ai/news/announcing-mistral-7b/

[159] https://www.silo.ai/blog/europes-open-language-model-poro-a-milestone-for-european-ai-and-low-resource-languages

[160] https://llama.meta.com

[161] https://www.reuters.com/technology/eus-ai-act-could-exclude-open-source-models-regulation-2023-12-07/

[162] Hacker, Philipp, “What’s Missing from the EU AI Act.” (2023) https://verfassungsblog.de/whats-missing-from-the-eu-ai-act/

[163] Ibid

[164] Recital 106

[165] Emphasis added

[166] See in particular Introduction and chapter 1.

[167] See Chapter 1.

[168] List provided by Hacker, P. (2023). AI Regulation in Europe: From the AI Act to Future Regulatory Challenges. Computing Research Repository, 2023(2310) at 10. 

[169] Hacker, P. (2023). AI Regulation in Europe: From the AI Act to Future Regulatory Challenges. Computing Research Repository, 2023(2310) at 10. 

[170] Recitals 99

[171] Recital 107

[172] Art 50. Art 50(1) states: “Providers shall ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the concerned natural persons are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of us.” Art 50(3a) “The information referred to (…) shall be provided to the concerned natural persons in a clear and distinguishable manner at the latest at the time of the first interaction or exposure.”

[173] See Recital 70d which refers to Regulation 2022/2065,16(6). Recital136 states: The obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, in particular risk of the actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation. The requirement to label content generated by AI systems under this Regulation is without prejudice to the obligation in Article 16(6) of Regulation 2022/2065 for providers of hosting services to process notices on illegal content received pursuant to Article 16(1) and should not influence the assessment and the decision on the illegality of the specific content. That assessment should be performed solely with reference to the rules governing the legality of the content.

[174] Recital 102.

[175] E.g. Recital 114  

[176] Art. 51. One source describes this as the equivalent of 2000 billion MacBook M1 chips combined. https://emildai.eu/ai-act-is-finally-approved-an-ultimate-regulation-or-will-we-keep-needing-more-revisions/ It’s worth pointing out that the equivalent position in the original  Executive Order of the United States of America applies to processing power of 10^26 which is far higher. This determination of processing power is that applicable to the model while training. The Financial Times posts a useful chart on the current crop of Artificial Intelligence models that are caught by the respective regulations: https://www.ft.com/content/773eb147-0f38-48f3-a2cc-18166ab8e793

[177] Recital 111.

[178] Art 52 and see Recital 113.

[179] Recital 111.

[180] See also Art 52.

[181] Hacker, Philipp, “What’s Missing from the EU AI Act.” (2023) https://verfassungsblog.de/whats-missing-from-the-eu-ai-act/

[182] https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[183] Recital 114

[184] Recital 114

[185] Recital 114

[186] Recital 114, 115

[187] Recital 114

[188] Recital 115.

[189] Art 52. Emphasis added.

[190] Hacker, P. (2023). AI Regulation in Europe: From the AI Act to Future Regulatory Challenges. Computing Research Repository, 2023(2310) at p. 11.

[191] See Amendment 771 to Annex VIII https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html

[192] Emphasis added.

[193] Emphasis added.

[194] These are foundation models trained with large amount of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain. See https://www.techpolicy.press/will-disagreement-over-foundation-models-put-the-eu-ai-act-at-risk/# See definition in Art 3: (44c) “‘high-impact capabilities’ in general purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general purpose AI models.”

[195] See Art 3 for definition

[196] Buocz, T., Pfotenhauer, S., & Eisenberger, I. (2023). Regulatory sandboxes in the AI Act: reconciling innovation and safety?. Law Innovation and Technology, 15(2), 357-389, at 388.

[197] See earlier in this chapter.

[198] Art 3.

[199] Art 50 (1)

[200] Art 50(3)

[201] Art 50(2)

[202] Art 50 (1a)

[203] The Brussels Effect, Oxford University Press, 2020

[204] Ibid at ix.

[205] See Chapter 5

[206] See chapter 7

[207] See Csernatoni, Raluca. “The EU at the Helm?: Navigating AI Geopolitics and Governance.” Charting the Geopolitics and European Governance of Artificial Intelligence, Carnegie Endowment for International Peace, 2024, pp. 9–15 at p. 9. JSTOR, http://www.jstor.org/stable/resrep58111.6. Accessed 2 June 2024. “For the EU to be at the helm of international AI governance would signify a concerted effort to shape the global norms, standards, and regulations that govern the development and deployment of AI technologies. Such a position would reflect the EU’s ambition to promote a human-centric and trustworthy approach to AI.” 

[208] https://www.politico.eu/article/france-germany-power-grab-kill-eu-blockbuster-ai-artificial-intelligence-bill/#:~:text=Latest%20news-,Power%20grab%20by%20France%2C%20Germany%20and%20Italy%20threatens%20to%20kill,feet%20on%20advanced%20AI%20rules.

[209] https://www.reuters.com/technology/france-had-no-prior-knowledge-microsofts-mistral-ai-deal-official-says-2024-02-28/

[210] https://2021.ai/eu-ai-act-a-comprehensive-timeline-and-preparation-guide/

[211] Smuha, Nathalie A. and Ahmed-Rengers, Emma and Harkens, Adam and Li, Wenlong and MacLaren, James and Piselli, Riccardo and Yeung, Karen, How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act (August 5, 2021). Available at SSRN: https://ssrn.com/abstract=3899991 or http://dx.doi.org/10.2139/ssrn.3899991

[212] “Legally Trustworthy AI (…) requires a regulatory framework which prevents the gravest harms and wrongs generated by AI systems, and appropriately allocates responsibility for them if they do occur, particularly when they violate fundamental rights”. Smuha, Nathalie A. and Ahmed-Rengers, Emma and Harkens, Adam and Li, Wenlong and MacLaren, James and Piselli, Riccardo and Yeung, Karen, How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act (August 5, 2021). Available at SSRN: https://ssrn.com/abstract=3899991 or http://dx.doi.org/10.2139/ssrn.3899991 at 7.

[213] “A regulatory framework for Legally Trustworthy AI therefore requires an effective enforcement architecture, which establishes and protects procedural rights and is internally coherent.” Smuha, Nathalie A. and Ahmed-Rengers, Emma and Harkens, Adam and Li, Wenlong and MacLaren, James and Piselli, Riccardo and Yeung, Karen, How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act (August 5, 2021). Available at SSRN: https://ssrn.com/abstract=3899991 or http://dx.doi.org/10.2139/ssrn.3899991 at 8.

[214] “[D]emocracy requires that the AI systems which are allowed under the Proposal do not undermine the ideals of transparency and accountability, which are both required for meaningful public participation and democratic accountability.” Smuha, Nathalie A. and Ahmed-Rengers, Emma and Harkens, Adam and Li, Wenlong and MacLaren, James and Piselli, Riccardo and Yeung, Karen, How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act (August 5, 2021). Available at SSRN: https://ssrn.com/abstract=3899991 or http://dx.doi.org/10.2139/ssrn.3899991 at 8.

[215] https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1

[216] Smuha, Nathalie A. and Ahmed-Rengers, Emma and Harkens, Adam and Li, Wenlong and MacLaren, James and Piselli, Riccardo and Yeung, Karen, How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act (August 5, 2021). Available at SSRN: https://ssrn.com/abstract=3899991 or http://dx.doi.org/10.2139/ssrn.3899991 at 2.

[217] Smuha, Nathalie A. and Ahmed-Rengers, Emma and Harkens, Adam and Li, Wenlong and MacLaren, James and Piselli, Riccardo and Yeung, Karen, How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act (August 5, 2021). Available at SSRN: https://ssrn.com/abstract=3899991 or http://dx.doi.org/10.2139/ssrn.3899991 at 5.

[218] https://www.euronews.com/next/2024/03/16/eu-ai-act-reaction-tech-experts-say-the-worlds-first-ai-law-is-historic-but-bittersweet

[219] “The threat of AI monopolies came under the limelight last month after it emerged that French start-up Mistral AI was partnering with Microsoft. To some in the EU, it came as a shock since France had pushed for concessions to the AI Act for open source companies like Mistral.” https://www.euronews.com/next/2024/03/16/eu-ai-act-reaction-tech-experts-say-the-worlds-first-ai-law-is-historic-but-bittersweet

[220] https://www.euronews.com/next/2023/12/15/potentially-disastrous-for-innovation-tech-sector-says-eu-ai-act-goes-too-far#:~:text=The%20organisation%20said%20that%20the,of%20AI%20talent%2C%20it%20warned.

[221] Walters, J., Dey, D., Bhaumik, D., & Horsman, S. (2023). Complying with the EU AI Act. Computing Research Repository, 2023(2307) at p 4.

[222] Hacker, Philipp, “What’s Missing from the EU AI Act.” (2023) https://verfassungsblog.de/whats-missing-from-the-eu-ai-act/

[223] https://www.ft.com/content/6cc7847a-2fc5-4df0-b113-a435d6426c81

[224] Ibid.

[225] Hacker, P. (2023). AI Regulation in Europe: From the AI Act to Future Regulatory Challenges. Computing Research Repository, 2023(2310),

[226] See Chapter 10

[227] Asress Adimi Gikay, Risks, innovation, and adaptability in the UK’s incrementalism versus the European Union’s comprehensive artificial intelligence regulation, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae013, https://doi.org/10.1093/ijlit/eaae013

Chapter 10

The Path to Regulate Artificial Intelligence in Brazil

As AI moves to fundamentally and profoundly transform society, Brazil, as a voice from the Global South, contributes its perspectives to a human-centred, inclusive, development-oriented, responsible and ethical approach to AI, with the fundamental aim of improving people’s lives and bridging the digital divide.[1]

Introduction

This chapter looks at the approach to Artificial Intelligence regulation in Brazil. Unlike the position in the United States of America and in the EU, there has been no adopted regulation on the matter in Brazil – at least not yet. Still, Brazil was very quickly out of the traps in terms of its review of and response to Artificial Intelligence. This chapter will look at some of those early-phase responses and at the initial attempts to adopt legislation on the matter. It will then look at more recent developments and will show that, following a review, the current legislative proposal is now comparable to the position found in the EU – an example of the so-called “Brussels Effect”,[2] the effects of which can already be found across several different regulatory areas.[3]

Background

Brazil is the fifth largest social media market in the world, with social networking audiences anticipated to grow to 188 million by 2027.[4] Unsurprisingly, with the largest population in Latin America, at 216 million, Brazil also has the largest online audience on that continent. Brazil has recently asserted itself globally on important issues of international concern, showing its willingness to engage in transnational issues of concern both to it and to other countries: such issues include not just the protection of the Amazon rainforest, which falls within its own jurisdiction, but also leadership in the green economy: 92 per cent of Brazilian electricity comes from renewable sources,[5] for example. On the issue of Artificial Intelligence, Brazil has actively engaged in international discussion pertaining to best practices in AI and has done so at a very early stage in the international dialogue on the subject.[6]

Artificial Intelligence Regulation in Brazil

The OECD records that Brazil demonstrated early-stage engagement and dialogue on the issue of Artificial Intelligence regulation as early as 2019. In that year the Federal Brazilian Government engaged in a consultation exercise, which began in December 2019 and ended three months later, inviting stakeholders across business, academia, civil society and the technical community to make contributions. Overall, more than 500 participants were recorded as having taken part,[7] with the aim of policy formulation and policy design. The Brazilian EBIA,[8] or AI strategy, was established in 2021, around the same time the European Commission published its proposal to regulate Artificial Intelligence in the European Union.[9] The EBIA is based on five principles defined by the OECD for responsible management of AI systems, namely: (i) inclusive growth, sustainable development and well-being; (ii) human-centred values and fairness; (iii) transparency and explainability; (iv) robustness, security and safety; and (v) accountability. Brazil has also made strides to improve its digital ecosystem so as to incentivise AI innovation while balancing innovation with regulatory measures.[10]

According to the OECD:

“The EBIA is part of a series of technology-related initiatives that have been implemented in Brazil during the past years, including the Brazilian Strategy for Digital Transformation (E-Digital), and General Data Protection Law (LGPD), amongst others. After two years of a public consultation that gathered around 1000 contributions, as well as a consultancy in AI hired by the Federal Government, the Brazilian AI Strategy was released. It was the first federal strategy which focused specifically on AI, and it intends to be the main framework directing other initiatives and strategies to be released in this topic in the near future.”[11]

The Brazilian Government has indicated its commitment to the “equitable and sustainable distribution of AI’s benefits across society”.[12] According to the OECD, many AI initiatives were commenced and supported by the Brazilian government, focusing on nine axes, as follows:

  1. Legislation, regulation and ethical use;
  2. AI governance;
  3. International aspects;
  4. Qualifications for a digital future (Education);
  5. Workforce and training;
  6. Research, Development, Innovation and Entrepreneurship;
  7. Application in the productive sectors;
  8. Application in the public sector;
  9. Public security.

Overall, the EBIA presents 73 strategic actions across broad-based areas (legislation, governance and international aspects), applying to a number of specifically identified areas: education; workforce and training; research, development, innovation and entrepreneurship; application in the productive sectors; application in the public sector; and public security.[13]

Since the initiation of the EBIA, Brazil has established six applied centres for AI, known as CPAs, in the areas of smart cities, agriculture, industry 4.0 and health. There were already pre-existing AI-focused establishments, including the Centre for AI known as C4AI and the Brazilian Association of Research and Industrial Innovation Network of Digital Technologies and Innovation, known as Embrapii’s Network. The objective of the network is to “leverage the productive capacity and competitiveness of Brazilian companies, encouraging the use and development of frontier technology in the industrial production process, based on AI”.[14] The EBIA also affords grants for startups and establishes education programmes which aim to upskill the current workforce from elementary to postgraduate level.[15]

“With these initiatives underway, Brazil is strengthening its position in AI technology to face national challenges, such as strengthening the skills of its critical mass, in terms of human and physical capabilities, to fully and competitively embrace AI-enabled transformation.”[16]  

Since 2019 several bills have circulated in the Brazilian National Congress to regulate AI systems.[17] Bill nº 21/2020 was laid before the Chamber of Deputies, while Bills nº 5.051/2019 and nº 872/2021 were laid before the Federal Senate. Brazil has a bicameral legislature, and a Bill may be laid before either house: the house before which the Bill is laid acts as the initiating house, and the other acts as the revising house.

At the beginning of 2022 the Chamber of Deputies approved Bill nº 21/2020 and sent it to the Senate, where that House decided to compose a Commission of Legal Experts (CJSUBIA) to prepare an alternative Bill on Artificial Intelligence. The Commission comprises members with recognised expertise in technology law and regulation. A series of public hearings was organised in April and May of 2022, bringing together more than 50 specialists from different groups, including public authorities, the business sector, civil society, and the scientific-academic community.

The hearings were organised around four main axes: (i) concepts, understanding, and classification of artificial intelligence; (ii) impacts of artificial intelligence; (iii) rights and duties; and (iv) accountability, governance, and supervision.

In June 2022 an international seminar was organised to understand the international position on best practice outcomes for this area. A period of research collaboration followed which had regard to similar regulatory efforts in other jurisdictions. 

In December 2022, the Commission published a report, some 900 pages in length, which included a draft alternative bill – Bill No. 2.338/2023.[18] The Bill was initiated by Senator Rodrigo Pacheco (PSD/MG) and lay for a time with the Temporary Internal Commission on Artificial Intelligence in Brazil,[19] comprising a representative base of Senators. This Commission discussed drafts and held 14 public hearings.[20] In December 2024 it approved a watered-down version of the Bill, which was immediately passed by the Senate. In keeping with the original version of the Bill, the legislation as passed by the Senate retains a risk-based regulatory model which imposes obligations on developers, distributors and applicators of high-risk systems. Risk assessments that address biases and potential discrimination are required – a move that mirrors the EU AI Act. The legislation also classifies certain activity as high-risk: traffic control, student admissions, hiring and promoting employees, and border and immigration control. The main feature of the Bill that was dropped is the classification of certain social media algorithms as high-risk.[21] The Bill is currently subject to review by the Chamber of Deputies.[22]

Movement Towards the European Union position

Tracing the development of the legislative initiatives in Brazil clearly shows the Brussels Effect in action. Bill No. 21 of 2020,[23] the primordial Bill in Brazil until the establishment of Bill No 2.338/2023, was originally drafted without mention of a risk classification system for Artificial Intelligence systems, although it did mention a risk-based management approach. In that Bill an AI system was, interestingly, defined as “a system based on a computable process that, from a set of goals defined by humans, can, through data and information processing, learn to perceive and interpret the external environment, as well as interact with it, make predictions, recommendations, categorisations, or decisions, utilizing, but not limited to, techniques such as: machine learning systems, including supervised, unsupervised, and reinforcement learning; systems based on knowledge or logic; statistical approaches, Bayesian inference, research and optimization methods.”[24] Risk-based management was mentioned in Art 6, which stated: “the development and usage of artificial intelligence systems shall consider the specific risks and definitions of the need to regulate artificial intelligence systems, and the respective degree of intervention shall always be proportional to the specific risks offered by each system and the probability of occurrence of these risks.” The foundations for Artificial Intelligence regulation in Brazil were defined as including “the encouragement of self-regulation, through the adoption of codes of conduct and guides to good practices, observing… good global practices.”[25]

Subsequently, Bill No 2.338/2023 proposed[26] a different definition for Artificial Intelligence which reads[27] as follows:

“System of Artificial Intelligence: a computational system, with different degrees of autonomy, designed to infer how to achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, by means of input data coming from machines or humans, with the objective of producing predictions, recommendations, or decisions that could influence either the virtual or the real world.”

The other notable provisions of the draft Bill will now be considered. 

The Bill commences with Article 1, which states that the Bill establishes general norms of national character for the development, implementation and responsible use of systems of Artificial Intelligence (AI) in Brazil, with the objective of protecting fundamental rights and guaranteeing the implementation of safe and reliable systems, for the benefit of humankind, the democratic regime, and scientific and technological development.

Article 2 continues in a similar vein and sets down the foundations for the development of such systems:

The importance of humankind;

Respect for human rights and the values of democracy;

The free development of personality;

Protection of the environment and sustainable development;

Equality, non-discrimination, plurality, and respect for the rights of workers;

The development of technology and innovation;

The defence of the consumer;

Privacy and the protection of data and informational self-determination; and

Access to information and education, and a conscious awareness of systems of artificial intelligence and their application.

Article 3 requires that the development, implementation and use of systems of Artificial Intelligence be carried out in good faith, in keeping with principles including inclusivity, sustainable development and well-being. Self-determination and freedom of decision-making and choice are also set down, as well as non-discrimination, justice, equality and inclusion, transparency, intelligibility, robustness of systems and security of information.

Article 4 provides definitions. It defines Artificial Intelligence as follows:

“a computational system, with different degrees of autonomy, designed to infer how to achieve a given set of objectives, using approaches based on machine learning and/or logic and knowledge representation, through input data from machines or humans, with the aim of producing predictions, recommendations or decisions that may influence the virtual or real environment”

It distinguishes between a provider, on the one hand, and, an operator on the other:

“artificial intelligence system provider: a natural or legal person, of a public or private nature, who develops an artificial intelligence system, directly or on demand, with a view to placing it on the market or applying it in a service provided by it, under its own name or brand, for consideration or free of charge” 

“artificial intelligence system operator: a natural or legal person, of a public or private nature, who employs or uses, on his behalf or for his benefit, an artificial intelligence system, unless such system is used within the scope of a personal activity of a non-professional nature”. 

Article 5 gives rights to persons affected by systems of Artificial Intelligence, including the rights: to be provided with information in respect of the person’s interactions with the artificial intelligence system; to be provided with explanations about the decision, recommendation or forecast taken by the AI system; to contest decisions made by the AI system; to human participation in decisions of the AI system; to non-discrimination; and to privacy and the protection of personal data.

Article 6 provides that the rights detailed in the Bill may be exercised before a competent administrative body, as well as before the court, either individually, or collectively, in accordance with extant legislation on individual, collective and “diffuse remedies”. 

Article 7 provides a right for persons affected by a system of Artificial Intelligence to receive a summary of their interaction with the system, with clear and adequate information in various respects, including as to the following: a description of the system; the type of decisions, recommendations and forecasts that the system makes; the consequences of the utilisation of the system for the person; and the categories of personal data utilised in the context of the functioning of the AI system.

Article 8 provides a right for a person affected by an AI system to solicit an explanation about the decision, prediction or recommendation, with information in respect of the criteria and procedures utilised, which should include: the rationale and logic of the system; the significance and forecast consequences of that type of decision for the individual concerned; the degree or level of contribution of the AI system to the decision; the data processed and the criteria for taking the decision; the mechanism for the affected person to contest the decision; and the possibility of soliciting human intervention within the terms of the law.

Article 9 gives further expression to the affected person’s right to contest the decision, and Article 10 deals with the right to human review of decisions producing relevant legal effects, considered below. Article 11 sets down that where the decision to be taken has an irreversible impact, or is difficult to reverse, or involves decisions that could generate risk to the life or physical integrity of an individual, such a decision should have an appropriate level of human input in respect of the final decision made.

Article 10 deals with the right to human review. It states:

“When the decision, prediction or recommendation of an artificial intelligence system produces relevant legal effects or that significantly impact the interests of the person, including through the generation of profiles and the making of inferences, the latter may request human intervention or review. (…) Human intervention or review will not be required if its implementation is proven to be impossible, in which case the person responsible for the operation of the artificial intelligence system will implement effective alternative measures, in order to ensure the reanalysis of the contested decision, taking into account the arguments raised by the affected person, as well as the reparation of any damage generated.” 

Article 11 provides for significant human involvement in certain cases: “in scenarios in which decisions, predictions or recommendations generated by artificial intelligence systems have an irreversible impact or are difficult to reverse or involve decisions that may generate risks to the life or physical integrity of individuals, there will be significant human involvement in the decision-making process and final human determination.” 

Article 12 speaks of the right to receive fair treatment in respect of the implementation and use of the system of Artificial Intelligence. 

Chapter III of the Bill, Articles 13 to 18, deals with categorising risk. Bill No. 2.338/2023, clearly mirroring the EU position, presents a risk classification structure. Chapter III is entitled “Classification of Risk”, refers to a “preliminary evaluation”,[28] and states:

“todo sistema de inteligência artificial passará por avaliação preliminar realizada pelo fornecedor para classificação de seu grau de risco”

“Every system of Artificial Intelligence shall pass through a preliminary evaluation, carried out by the supplier, to establish the classification of its degree of risk.”

The legislation also affords the competent authority the opportunity to reclassify the risk.[29] Systems categorised as high risk are caught by the strictest provisions of the regulation.

Prior to its placement on the market or use in service, every artificial intelligence system must undergo a preliminary assessment carried out by the supplier to classify its degree of risk (Article 13).

Article 14, again mirroring the position in the European Union, prohibits certain Artificial Intelligence systems:

Art. 14. The implementation and use of artificial intelligence systems is prohibited: 

I – that employ subliminal techniques that have the objective or effect of inducing the natural person to behave in a way that is harmful or dangerous to his or her health or safety or against the foundations of this Law; 

II – that exploit any vulnerabilities of specific groups of natural persons, such as those associated with their age or physical or mental disability, in order to induce them to behave in a way that is harmful to their health or safety or against the foundations of this Law; 

III – by the public authorities, to evaluate, classify or rank natural persons, based on their social behaviour or personality attributes, by means of universal scoring, for access to goods and services and public policies, in an illegitimate or disproportionate manner.” 

Article 15 addresses the use of Biometric identification systems and states that “within the scope of public security activities, the use of biometric identification systems at a distance, on a continuous basis in spaces accessible to the public, is only allowed when there is a provision in specific federal law and judicial authorization in connection with the individualized criminal prosecution activity” in respect of the following:

I – prosecution of crimes punishable by a maximum sentence of imprisonment of more than two years; 

II – search for victims of crimes or missing persons; or 

III – crime in flagrante delicto. 

Article 16 states that it will be the responsibility of the competent authority to regulate excessively risky artificial intelligence systems. 

High-risk systems are those intended for use for the following purposes (Article 17):

I – application as safety devices in the management and operation of critical infrastructures, such as traffic control and water supply and electricity networks; 

II – vocational education and training, including systems for determining access to educational or vocational training institutions or for the assessment and monitoring of students; 

III – recruitment, screening, filtering, evaluation of candidates, decision-making on promotions or termination of contractual employment relationships, division of tasks and control and evaluation of the performance and behaviour of people affected by such applications of artificial intelligence in the areas of employment, worker management and access to self-employment; 

IV – evaluation of criteria for access, eligibility, concession, review, reduction or revocation of private and public services that are considered essential, including systems used to assess the eligibility of natural persons for the provision of public assistance and security services; 

V – assessment of the indebtedness capacity of individuals or establishment of their credit rating; 

VI – dispatch or prioritization of emergency response services, including firefighters and medical assistance; 

VII – administration of justice, including systems that assist judicial authorities in the investigation of facts and in the application of the law; 

VIII – autonomous vehicles, when their use may generate risks to the physical integrity of people; 

IX – applications in the health area, including those intended to assist in medical diagnoses and procedures; 

X – biometric identification systems; 

XI – criminal investigation and public security, in particular for individual risk assessments by competent authorities in order to determine the risk of a person committing offences or reoffending, or the risk to potential victims of criminal offences or to assess the personality traits and characteristics or past criminal behaviour of natural persons or groups; 

XII – analytical study of crimes related to natural persons, allowing law enforcement authorities to search large sets of complex data, related or unrelated, available in different data sources or in different data formats, in order to identify unknown patterns or discover hidden relationships in the data; 

XIII – investigation by administrative authorities to assess the credibility of evidence in the course of the investigation or prosecution of offences, to predict the occurrence or recurrence of an actual or potential offence on the basis of the profiling of natural persons; or 

XIV – migration management and border control. 

Article 18 provides that the competent authority can update the list of excessive or high-risk artificial intelligence systems.
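The tiered structure of Articles 13 to 18 can be made a little more concrete with a short sketch. The Python fragment below is purely illustrative and forms no part of the Bill: the enumeration, the function name and the keyword lists are hypothetical simplifications of the categories described above, and the real preliminary assessment is a documented legal exercise carried out by the supplier and ultimately subject to the competent authority.

```python
# Purely illustrative sketch of the tiered structure in Articles 13-18 of
# Bill No. 2.338/2023. All identifiers and keyword lists are hypothetical
# simplifications; the Bill leaves the real classification to the supplier's
# preliminary assessment and to regulation by the competent authority.
from enum import Enum


class RiskLevel(Enum):
    EXCESSIVE = "excessive risk - prohibited (Art. 14)"
    HIGH = "high risk (Art. 17 purposes)"
    OTHER = "not high risk (residual category)"


# Abbreviated paraphrase of the high-risk purposes listed above (Article 17).
HIGH_RISK_PURPOSES = {
    "critical infrastructure safety", "education access and assessment",
    "recruitment and worker management", "essential services eligibility",
    "credit scoring", "emergency dispatch", "administration of justice",
    "autonomous vehicles", "health applications", "biometric identification",
    "criminal investigation and public security", "crime analytics",
    "evidence assessment and profiling", "migration and border control",
}

# Abbreviated paraphrase of the Article 14 prohibitions.
PROHIBITED_TRAITS = {
    "subliminal manipulation",
    "exploitation of vulnerable groups",
    "universal social scoring by public authorities",
}


def preliminary_evaluation(intended_purpose: str, traits: set) -> RiskLevel:
    """Toy version of the supplier's preliminary evaluation under Article 13."""
    if traits & PROHIBITED_TRAITS:
        return RiskLevel.EXCESSIVE
    if intended_purpose in HIGH_RISK_PURPOSES:
        return RiskLevel.HIGH
    return RiskLevel.OTHER


if __name__ == "__main__":
    print(preliminary_evaluation("credit scoring", set()))        # RiskLevel.HIGH
    print(preliminary_evaluation("music recommendation", set()))  # RiskLevel.OTHER
```

The point of the sketch is simply that classification turns on intended purpose and prohibited characteristics, not on any property of the underlying model; the categories themselves remain open to updating by the competent authority under Article 18.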

Chapter IV of the Bill, Articles 19 to 26, sets down issues around the governance of systems of Artificial Intelligence.

Article 19 covers the precepts of the governance structures.  Article 20 provides that operators or providers (“agents”) of high-risk systems shall adopt specific governance measures and internal processes: 

I – documentation, in the format appropriate to the development process and the technology used, regarding the operation of the system and the decisions involved in its construction, implementation and use, considering all relevant stages in the life cycle of the system, such as the design, development, evaluation, operation and discontinuation stages of the system; 

II – use of tools for automatic recording of the system’s operation, in order to allow the evaluation of its accuracy and robustness and to ascertain discriminatory potentials, and implementation of the risk mitigation measures adopted, with special attention to adverse effects; 

III – conducting tests to evaluate appropriate levels of reliability, according to the sector and the type of application of the artificial intelligence system, including robustness, accuracy, precision and coverage tests; 

IV – data management measures to mitigate and prevent discriminatory biases, including: 

a) evaluation of the data with appropriate measures to control human cognitive biases that may affect the collection and organization of data and to avoid the generation of biases due to problems in classification, failures or lack of information in relation to affected groups, lack of coverage or distortions in representativeness, according to the intended application, as well as corrective measures to avoid the incorporation of structural social biases that may be perpetuated and amplified by the technology; and 

b) composition of an inclusive team responsible for the design and development of the system, guided by the search for diversity. 

V – adoption of technical measures to enable the explainability of the results of artificial intelligence systems and measures to provide operators and potential impacted parties with general information on the operation of the artificial intelligence model employed, explaining the logic and criteria relevant to the production of results, as well as, upon request of the interested party, providing adequate information that allows the interpretation of the results concretely produced, respecting industrial and commercial secrecy. 

Public authorities, when contracting for, developing or using artificial intelligence systems considered to be of high risk, are required to adhere to specific measures (Article 21).

Article 22 provides for an “algorithmic impact assessment of artificial intelligence systems”, which is required of artificial intelligence agents in respect of high-risk Artificial Intelligence systems.

Article 23 states that such an assessment will be carried out by a professional, or professionals, with the technical, scientific and legal knowledge necessary to produce the report and with functional independence. An evaluation methodology is set out in Article 24.

Article 25 states that the algorithmic impact assessment “will consist of a continuous iterative process, carried out throughout the entire life cycle of high-risk artificial intelligence systems, requiring periodic updates.”

Article 26 provides for the conclusions of the impact assessment to be made public after commercially sensitive material has been protected.  
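One way to picture the “continuous iterative process” of Articles 22 to 26 is as a running record that is updated at each stage of the life cycle and published in redacted form. The sketch below is illustrative only: the class and field names are hypothetical, and the Bill prescribes who carries out the assessment and that its conclusions be published with commercially sensitive material protected, not any particular record-keeping format.

```python
# Illustrative sketch only: one hypothetical way to record the continuous,
# iterative algorithmic impact assessment contemplated by Articles 22-26.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessmentEntry:
    assessed_on: date
    lifecycle_stage: str        # e.g. design, development, operation
    risks_identified: list
    mitigations: list
    trade_secrets: list = field(default_factory=list)  # withheld on publication


@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    entries: list = field(default_factory=list)

    def add_periodic_update(self, entry: ImpactAssessmentEntry) -> None:
        """Article 25: the assessment is updated throughout the life cycle."""
        self.entries.append(entry)

    def public_summary(self) -> dict:
        """Article 26: conclusions are made public, commercial secrets removed."""
        return {
            "system": self.system_name,
            "updates": [
                {
                    "date": e.assessed_on.isoformat(),
                    "stage": e.lifecycle_stage,
                    "risks": e.risks_identified,
                    "mitigations": e.mitigations,
                }
                for e in self.entries
            ],
        }
```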

Chapter V deals with issues of civil liability. Article 27[30] states that where a system of Artificial Intelligence is of high risk, or of excessive risk, the supplier or operator is strictly liable for the damage caused, to the extent of its participation in the damage. Where, however, the individual interacts with a system of Artificial Intelligence which is not high risk, the fault of the agent that caused the damage is presumed, and the onus of proof is reversed in favour of the victim.

Chapter VI sets out good practice and governance codes. Chapter VII covers the area of communication of serious incidents. 

Article 31 states that:

“Artificial intelligence agents shall report to the competent authority the occurrence of serious security incidents, including when there is a risk to the life and physical integrity of persons, the interruption of the operation of critical infrastructure operations, serious damage to property or the environment, as well as serious violations of fundamental rights, in accordance with the Regulation.”

Chapter VIII deals with the supervision and inspection of Artificial Intelligence. Section I deals with the competent authority and Section II with administrative sanctions.

The competent authority is provided for in Article 32 which states that “the Executive Branch shall designate a competent authority to ensure the implementation and supervision of this Law.” 

Articles 33, 34 and 35 contain other provisions in respect of the functions of the competent authority. Article 36 contains provisions on sanctions, stating that the competent authority can issue a warning, or impose a simple fine limited to R$ 50,000,000.00 (fifty million reais), or of up to 2% (two per cent) of the revenue of the entity, its group or conglomerate in Brazil in its last fiscal year, excluding taxes.

Similar to the position in the EU, Articles 38, 39 and 40 provide for the use of regulatory sandboxes. Article 40 states that the competent authority “shall issue regulations to establish the procedures for requesting and authorizing the operation of regulatory sandboxes, and may limit or interrupt their operation, as well as issue recommendations, taking into account, among other aspects, the preservation of fundamental rights, the rights of potentially affected consumers and the security and protection of personal data that are subject to processing.”

Article 41 states that participants in the AI regulatory sandbox “remain liable in accordance with applicable liability law for any harm inflicted on third parties as a result of the experimentation taking place in the sandbox.” 

Article 42 provides for an exception to copyright infringement stating:

The automated use of works, such as extraction, reproduction, storage and transformation, in data and text mining processes in artificial intelligence systems, in activities carried out by research and journalism organizations and institutions, and by museums, archives and libraries, does not constitute an infringement of copyright, provided that: 

I – does not have as its objective the simple reproduction, exhibition or dissemination of the original work itself; 

II – the use occurs to the extent necessary for the purpose to be achieved; 

III – does not unjustifiably harm the economic interests of the holders; and 

IV – does not compete with the normal exploitation of the works. 

Article 43 provides for a publicly accessible database for high-risk systems containing the public documents of the impact assessments with commercially sensitive information removed.

Chapter IX sets out final provisions on aspects of law such as non-exclusion of other provisions set down in law (Article 44).

Comment

The proposed Brazilian enactment as passed by the Senate retains its risk-classification system, and the way it classifies risk is comparable to the equivalent position in the European Union AI Act.[31] Comparing the text of the EU AI law,[32] it will be seen that both versions indicate the adoption of a system of risk classification – and, interestingly, both adopt a risk management system.[33] There are also similarities in how fines are levied and in the use of regulatory sandboxes. The risk classification system was not present in the antecedent bills on the subject in Brazil and so, cautiously, we can cite this as an example of Bradford’s “Brussels Effect”[34] – the influence of European Union regulatory positions on the equivalent position in jurisdictions around the world in certain regulatory areas. Bradford gives as examples market competition, the digital economy, consumer health and safety, and the environment. To this list, as already mentioned in Chapter 5, we can now cautiously add Artificial Intelligence. There may be other jurisdictions which follow the EU approach, though it is worth noting that the comparable provisions in the United States of America[35] and China[36] envisage setting global standards too.

Under the Brazilian legislation every AI system must implement a governance structure which includes transparency and security measures, set out in Chapter IV of the Bill. High-risk AI systems must also include: (i) technical documentation covering several characteristics of the system; (ii) log registers; (iii) reliability tests; (iv) measures to mitigate discriminatory biases; and (v) technical explainability measures.[37]


The European Union position is set out in Article 9 of the original law as proposed by the Commission and states:

“A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.”

While the Brazilian legislature is looking at adopting a risk classification which refers to excessive risk or high risk, with every AI system subject to a preliminary evaluation, the European Union position uses, effectively, three distinct risk classifications: unacceptable risk, high risk and limited risk.[38]

Overall, the proposed Bill in Brazil is forward-looking and concerned with the rights of individuals, including express mention of the rights of workers. Interestingly, it deals with the issue of civil liability and distinguishes between liability for high-risk systems and liability for systems whose decisions are not high risk. In respect of the latter the burden of proof is reversed in favour of the victim; in respect of the former the supplier, or provider, of the Artificial Intelligence system is liable in damages to the extent of its involvement.

The aspects set down in Article 2 are also laudable. They put forward a positive, progressive position on Artificial Intelligence in Brazil which embraces the technology while also setting its parameters, referring to concepts like the free development of personality, consumer protection, access to information, protection of the environment, equality, non-discrimination, plurality and respect for the rights of workers.

The Bill is also notable in respect of its redress mechanisms. Numerous provisions refer to the rights of the person impacted to redress: to solicit an explanation or to contest a decision. There are transparency obligations too. 

Overall the Brazilian approach presents an impressive array of provisions across the sweep of the technology which are fundamentally based in the rights of the individual. The legislature has clearly gone to great lengths to define issues it anticipates will be important and to set down clear governing provisions for a variety of scenarios. It also gives a clear indication that while Artificial Intelligence systems are acceptable there will be times when human intervention is necessary.

Of course, interpretation by the courts of various aspects can still be anticipated – and this is appropriate, as market conditions could change in unanticipated ways. The interpretation of Article 27 on civil liability is a case in point, as a clear distinction is made between high-risk systems, where proportional liability for damages arises, and systems which do not involve a high risk, where a reverse burden of proof in favour of the victim arises. The risk classification system set down in Chapter III will likely link in to the application of Article 27. Final Congressional approval and enactment of the Brazilian draft law might take a few more years.[39]


[1] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[2] See Bradford, The Brussels Effect, 2020.

[3] Examples given in The Brussels Effect include: market competition, the digital economy, and the environment. To this list can now, evidently, be added the field of Artificial Intelligence. 

[4] https://www.statista.com/topics/6949/social-media-usage-in-brazil/#editorsPicks

[5] https://www.ft.com/content/fda15a48-b6ab-44fe-9bc0-1127feedaa80

[6] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[7] https://oecd.ai/en/dashboards/policy-initiatives/http:%2F%2Faipo.oecd.org%2F2021-data-policyInitiatives-27104

[8] Estratégia Brasileira de Inteligência Artificial

[9] https://artificialintelligenceact.eu/developments/

[10] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[11] https://oecd.ai/en/dashboards/policy-initiatives/http:%2F%2Faipo.oecd.org%2F2021-data-policyInitiatives-27104

[12] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[13] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[14] https://oecd.ai/en/dashboards/policy-initiatives/http:%2F%2Faipo.oecd.org%2F2021-data-policyInitiatives-27344

[15] Some of the fields of knowledges promoted by the programme are data science, cybersecurity, the Internet of Things, cloud computing and robotics. https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[16] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[17] Bill No 5.051/2019; Bill No 872/2021; and Bill No 21/2020

[18] https://www25.senado.leg.br/web/atividade/materias/-/materia/157233

[19] https://legis.senado.leg.br/comissoes/comissao?codcol=2629

[20] https://www12.senado.leg.br/noticias/materias/2024/12/10/senado-aprova-regulamentacao-da-inteligencia-artificial-texto-vai-a-camara

[21] “The version approved this Tuesday kept social network algorithms off the list of systems considered high risk – a decision that met requests from opposition Senators Marcos Rogério (PL-RO), Izalci Lucas (PL-DF) and Mecias de Jesus (Republicanos-RR) and that was lamented by some government-aligned parliamentarians.” Source: Agência Senado (https://www12.senado.leg.br/noticias/materias/2024/12/10/senado-aprova-regulamentacao-da-inteligencia-artificial-texto-vai-a-camara)

[22] https://www12.senado.leg.br/noticias/materias/2024/12/10/senado-aprova-regulamentacao-da-inteligencia-artificial-texto-vai-a-camara

[23] https://www.derechosdigitales.org/wp-content/uploads/Brazil-Bill-Law-of-No-21-of-2020-EN.pdf

[24] Art 2. 

[25] Art 4.

[26] https://legis.senado.leg.br/sdleg-getter/documento?dm=9347593&ts=1698248944489&disposition=inline&_gl=1*1oqxom7*_ga*MTMxOTQ1Njg5NC4xNjk4NzU3MjQ1*_ga_CW3ZH25XMK*MTY5ODc1NzI0NC4xLjEuMTY5ODc1NzMwMy4wLjAuMA..

[27] I – sistema de inteligência artificial: sistema computacional, com graus diferentes de autonomia, desenhado para inferir como atingir um dado conjunto de objetivos, utilizando abordagens baseadas em aprendizagem de máquina e/ou lógica e representação do conhecimento, por meio de dados de entrada provenientes de máquinas ou humanos, com o objetivo de produzir previsões, recomendações ou decisões que possam influenciar o ambiente virtual ou real;

[28] Section 1 of Chapter III.

[29] Article 13. Competent Authority defined in Article 4. 

[30] § 1: Where the artificial intelligence system is of high risk or excessive risk, the supplier or operator is objectively (strictly) liable for the damage caused, to the extent of its participation in the damage.

§ 2: Where the artificial intelligence system is not of high risk, the fault of the agent causing the damage will be presumed, with the burden of proof reversed in favour of the victim.

[31] See Tito Rendas, Ivar Hartmann, From Brussels to Brasília: How the EU AI Act Could Inspire Brazil’s Generative AI Copyright Policy, GRUR International, 2024, ikae027, https://doi.org/10.1093/grurint/ikae027

[32] https://artificialintelligenceact.eu/the-act/

[33] See Article 9.

[34] Bradford, The Brussels Effect, (OUP) 2020. 

[35] See https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[36] See https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm art 6.

[37] https://oecd.ai/en/wonk/brazils-path-to-responsible-ai

[38] See 5.2.2 of Explanatory Memorandum to the initial European Commission proposal for an AI Act which refers to Unacceptable, High Risk, and no risk. https://artificialintelligenceact.eu/wp-content/uploads/2022/05/AIA-COM-Proposal-21-April-21.pdf The European Parliament subsequently adopted “Limited Risk” https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[39] Tito Rendas, Ivar Hartmann, From Brussels to Brasília: How the EU AI Act Could Inspire Brazil’s Generative AI Copyright Policy, GRUR International, 2024, ikae027, https://doi.org/10.1093/grurint/ikae027

Chapter 11

The Method of Enforcement in China

“It is imperative for the United States to lead and shape the rules governing such a transformative technology and not permit China to lead on innovation or write the rules of the road”[1] United States of America Senate Majority Leader Chuck Schumer

Introduction

Timeline

China entered the AI regulatory environment early, with the initial pillars of its current “regulatory castle” conceived in 2016 during deliberations around the Cybersecurity Law.[2] In 2017 the State Council issued a New Generation AI Development Plan, focusing on encouraging AI development and laying out a timetable for AI governance regulations up to 2030.[3] In 2019 the National New Generation AI Governance Expert Committee issued a document[4] setting down eight principles for AI governance. In 2021 China issued a regulation on recommendation algorithms,[5] which created new requirements for how algorithms are built and deployed, as well as disclosure rules to Government and the public. In 2022 it issued rules for deep synthesis (synthetically generated content)[6] and in 2023 issued interim measures on generative AI systems like GPT-4.[7] Information control is described as central to each of these measures, but the various regulations also contain many other key provisions. The rules for recommendation algorithms protect the rights of workers subject to algorithmic scheduling. The deep synthesis regulations require labelling of synthetically generated content. The 2023 draft generative AI regulations required training data and user-directed outputs to be “true and accurate”. United States Senate Majority Leader Chuck Schumer cited China’s release of its own approach to regulating AI as “a wake-up call to the nation”.[8]

All three regulations require developers to engage with a newly built government repository called the Algorithm Registry. The registry gathers information on how algorithms are trained and requires them to pass a security self-assessment. Furthermore, China is preparing to draft a national AI law in the years ahead,[9] “on the scale of the European Union’s (…) AI Act.”[10]
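By way of illustration only, the kind of information such a filing bundles together might be sketched as follows. The field names and the helper function below are hypothetical and are not drawn from the registry’s actual schema or interface, which are defined by the Chinese authorities; the sketch simply reflects the two elements described above, namely information on how the algorithm is trained and a security self-assessment.

```python
# Illustrative sketch only: a hypothetical representation of what an Algorithm
# Registry filing might contain. Field names and build_registry_filing are
# invented for illustration; the real registry defines its own schema.
from datetime import date


def build_registry_filing(provider: str, algorithm_name: str,
                          training_data_sources: list,
                          self_assessment_passed: bool) -> dict:
    """Assemble a hypothetical registry filing as a plain dictionary."""
    return {
        "provider": provider,
        "algorithm": algorithm_name,
        "training_data_sources": training_data_sources,
        "security_self_assessment": {
            "passed": self_assessment_passed,
            "date": date.today().isoformat(),
        },
    }


if __name__ == "__main__":
    filing = build_registry_filing(
        provider="Example Provider Co.",
        algorithm_name="news-recommendation-v2",
        training_data_sources=["licensed news corpus", "user interaction logs"],
        self_assessment_passed=True,
    )
    print(filing)
```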

Taking these measures together, one observer, the Carnegie Endowment for International Peace, put it as follows:

“Beijing is leading the way in AI regulation, releasing groundbreaking new strategies to govern algorithms, chatbots, and more. Global partners need a better understanding of what, exactly, this regulation entails, what it says about China’s AI priorities, and what lessons other AI regulators can learn.”[11]

Paul Triolo, a senior fellow at the Paulson Institute, states:

“Clearly Beijing now desires to set the rules of the digital economy, in China, and perhaps eventually beyond. Chinese regulators are now feeling their oats across the breadth of the digital economy. They are venturing into areas, such as recommendation algorithms, that were not previously considered when the initial pillars of the current regulatory castle were conceived in 2016 during deliberations around the Cybersecurity Law.”[12]

The Cyberspace Administration of China (CAC) is described as the clear bureaucratic leader on governance issues around Artificial Intelligence. The Ministry of Science and Technology is another influential entity. There are also numerous think tanks, including the China Academy of Information and Communications Technology and Tsinghua University’s Institute for AI International Governance.

Interim Generative AI Regulations 2023[13]

The Order constitutes interim measures for the administration of generative artificial intelligence services and refers in Article 1 to the promotion of the healthy development and standardised application of such technology, with a view to safeguarding national security and the social and public interest, and protecting the legitimate rights and interests of citizens.

Article 2 makes a distinction between generative artificial intelligence technology and generative artificial intelligence services. The former are not covered by the measures in question where they are developed and applied by industry, enterprise, educational and scientific research institutions provided they do not provide generative artificial intelligence services. The latter are defined as the use of generative artificial intelligence technology to provide the public with the service of generating text, pictures, video and other content.

Article 3 specifically sets out the balancing act between development and security, and the commitment to promoting innovation, taking what are described as “effective measures” to encourage the innovative development of generative artificial intelligence. Article 4 provides that the provision and use of the services set out in Article 2 shall adhere to the core values of socialism and shall not incite subversion of state power or the overthrow of the socialist system, endanger national security and interests, damage the image of the country, incite the secession of the country, undermine national unity and social stability, or promote terrorism, extremism, national hatred, ethnic discrimination, violence, obscene pornography, or false and harmful information and other content prohibited by laws and administrative regulations. That Article also specifies that, in the process of designing the service and training it, effective measures shall be taken to prevent discrimination on grounds of ethnicity, belief, country, region, gender, age, occupation, health and other grounds. The service shall also respect intellectual property rights and business ethics, keep business secrets, and not be used to create a monopoly or engage in unfair competition. It shall respect the legitimate rights and interests of others, shall not endanger the physical and mental health of others, and shall not infringe upon others’ rights and interests in their portrait, reputation, honour, privacy and personal information (Article 4(4)). The Article also sets down an obligation to take effective measures to improve the transparency of generative artificial intelligence services and to improve the accuracy and reliability of the generated content.

These opening provisions, all contained in Chapter I of the Order, are both wide-ranging and specific: wide-ranging in that they cover the broad scope of the services defined in the terms of the Order, and specific in that they set out not just the applicable rights of citizens but also the interests of the socialist state, including the stipulation that the technology is not to be used to “overthrow the socialist system”.

Chapter II deals with the development of the technology. Article 5 seeks to encourage innovation and application of the technology across various industries and fields, with the aim of generating positive, healthy and upward-looking high-quality content and building an application ecosystem.

Article 6 encourages independent innovation across the different building blocks of the technology, such as algorithms, frameworks, chips and supporting software platforms. It also encourages international exchange and cooperation and, interestingly, participation in the formulation of international rules related to the technology. Clearly China is hopeful that its rules in this space will be adopted more widely abroad, or will otherwise have an effect on international markets outside China.

Article 7 speaks to service providers and mandates them to carry out pre-training, optimisation training and other training data processing activities in accordance with law and to abide by the following:

(1) Use data and basic models from legitimate sources;

(2) If intellectual property rights are involved, it shall not infringe on the intellectual property rights enjoyed by others according to law;

(3) If personal information is involved, personal consent shall be obtained or other circumstances in accordance with the provisions of laws and administrative regulations;

(4) Take effective measures to improve the quality of training data and enhance the authenticity, accuracy, objectivity and diversity of training data;

(5) Other relevant provisions of laws and administrative regulations such as the Cyber Security Law of the People’s Republic of China, the Data Security Law of the People’s Republic of China, the Personal Information Protection Law of the People’s Republic of China, and the relevant regulatory requirements of relevant competent departments.

Article 8 refers to data annotation and mandates the use of “clear, specific and operable labelling rules”, as well as the carrying out of data annotation quality evaluation and the sampling and verification of the accuracy of the annotation content. This provision is important because it points to the question of the labelling of content, sometimes referred to as watermarking, an issue for systems of this type for the reasons already mentioned in chapter 3 of this book. Ultimately, it is in our interests to have this content labelled, both because of the accountability, or lack thereof, of the technology, and because of the importance of continuing to distinguish between human-led content and content generated by a statistical model.

Chapter III deals with service specification. Article 9 sets down clearly that the provider of the service “shall bear the responsibility” for the production of network information in accordance with the law and shall fulfil the obligation of network security. It shall also bear responsibility under the relevant data protection laws where the data concerned constitutes personal data. Interestingly, Article 10 places an obligation on the provider to, inter alia, “prevent minors from over-reliance” on the services. Article 9 had also referred to a service agreement between provider and user, so it appears an obligation would fall on the provider pursuant to that agreement to ensure the safe and responsible use of the service in the case of minors. In other words, the provider must determine if the user is a minor and prevent “over-reliance or indulgence”.
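The labelling point made above in connection with Article 8 and watermarking can be illustrated with a minimal sketch. The label format, the signing key and the function name below are all hypothetical; the Chinese measures (and the deep synthesis rules mentioned earlier) require that synthetic content be identifiable, but they do not prescribe this, or any other, particular technical scheme.

```python
# Illustrative sketch only: attaching a provenance label to model-generated
# content so that it can be distinguished from human-authored material.
# The label format, SIGNING_KEY and tag_generated_content are hypothetical.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-provider-held-secret"   # hypothetical provider key


def tag_generated_content(text: str, model_name: str) -> dict:
    """Wrap generated text with a machine-readable 'AI-generated' label."""
    label = {
        "content": text,
        "generated_by": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    payload = json.dumps(label, sort_keys=True).encode("utf-8")
    # A keyed hash lets downstream platforms check that the label has not been
    # stripped or altered; robust watermarking of the content itself is a much
    # harder technical problem and is not attempted here.
    label["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return label
```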

Article 14 provides that in circumstances where the provider finds illegal content, it shall “take disposal measures such as stopping generation, transmission and elimination, take model optimization training and other measures for rectification, and report to the relevant competent authorities.” Where a provider finds a user uses a relevant service to engage in illegal activities “he shall take measures such as warning, restricting functions, suspending or terminating the provision of services to them in accordance with the law, keep relevant records, and report to the relevant competent authorities”.

Chapter IV deals with supervision, inspection and legal liability. It requires the formulation of classification and hierarchical supervision rules or guidelines by the relevant national competent departments. Article 17 sets down a clear mandate for those providers that provide relevant services with “public opinion attributes or social mobilization capabilities”: such providers are required to carry out security assessments and to comply with all State law. Users have the right to complain where State law has not been complied with by a provider (Article 18). Article 19 refers to the supervision and inspection of relevant services by the competent departments of the Government, indicating that such supervision and inspection “shall” be carried out.

Article 22 sets down definitions for key terms and includes the following:

(1) Generative artificial intelligence technology refers to models and related technologies with the ability to generate text, pictures, audio, video and other content.

(2) Generative artificial intelligence service providers refer to organizations and individuals who use generative artificial intelligence technology to provide generative artificial intelligence services (including the provision of generative artificial intelligence services through programmable interfaces, etc.).

(3) Generative artificial intelligence service users refer to organizations and individuals who use generative artificial intelligence services to generate content.

Conclusion

The various provisions which govern this space in China create a large tapestry, or “castle”, which sets down very specific obligations on providers of relevant services. Chief among these are the Interim Generative AI Regulations 2023. Article 4 is particularly noteworthy in that it seeks adherence to the core values of socialism and specifically prohibits use of the technology to incite subversion of state power, overthrow the socialist system, or endanger national security and interests. These references set down imperative use-restrictions on Artificial Intelligence but, by implication, they also accept that the technology, unrestrained, is capable of those things. There have also been recent emerging legislative proposals.[18]


[18] Wang et al, “Artificial intelligence “Law(s)” in China (Part II): sectoral governance and emerging legislative proposals” Journal of AI Law and Regulation AIRe 2025, 2(2), 139-157.

Article 4 also refers to damaging “the image of the country”, inciting “the secession of the country” and otherwise undermining “national unity and social stability”, as well as to the promotion of terrorism, extremism, national hatred, ethnic discrimination, violence, obscene pornography, and false and harmful information and other content prohibited by laws and administrative regulations. The design, training, intellectual property, competition, personal-rights and transparency obligations it imposes have already been set out above.

One commentator, writing in a conference paper, asks for “refinement” of the regulations:

“Overall, if Chinese leaders want to develop generative AI in an orderly, rapid and vigorous manner in China, it is necessary to refine the specific rules of regulatory measures related to generative AI. In the process of detailed rules, we can find many hidden, controversial legal issues and form a more comprehensive legal regulation advice in the collision of thought. At the same time, we can refer to foreign legislation, regulatory ideas and measures, helping Chinese regulators with the development of generative artificial intelligence constantly adjusting and modifying regulations, so as to better promote the development of Chinese native generative artificial intelligence related industries.”[14]

As regards future direction, another author states:

“At the current stage, China should adhere to a balanced approach that emphasizes both security and development in the governance of generative artificial intelligence. Based on the principle of placing people at the centre, we should promote the establishment of an artificial intelligence ethics code and promote the development of a systematic legal regulatory system that is founded on general generative artificial intelligence legislation and supplemented by specific management measures.” [15]

Finally, one source considers that it would be a mistake to dismiss the importance of the Chinese regulations altogether. He first highlights the concerns:

“But international discourse on Chinese AI governance often fails to take these regulations seriously, to engage with either their content or the policymaking process. International commentary often falls into one of two traps: dismissing China’s regulations as irrelevant or using them as a political prop. Analysts and policymakers in other countries often treat them as meaningless pieces of paper. President Xi Jinping and the Chinese Communist Party (CCP) have unchecked power to disregard their own rules, the argument goes, and therefore the regulations are unimportant.”[16]

He then states as follows: “The specific requirements and restrictions [the Regulations] impose on China’s AI products matter. They will reshape how the technology is built and deployed in the country, and their effects will not stop at its borders. They will ripple out internationally as the default settings for Chinese technology exports. They will influence everything from the content controls on language models in Indonesia to the safety features of autonomous vehicles in Europe. China is the largest producer of AI research in the world, and its regulations will drive new research as companies seek out techniques to meet regulatory demands.”[17]


[1] https://www.reuters.com/world/us/senate-leader-schumer-pushes-ai-regulatory-regime-after-china-action-2023-04-13/

[2] https://digichina.stanford.edu/work/experts-examine-chinas-pioneering-draft-algorithm-regulations/

[3] “The third step is that by 2030, the theory, technology and application of artificial intelligence will generally reach the world’s leading level, becoming the world’s major artificial intelligence innovation centre, and the intelligent economy and intelligent society have achieved remarkable results, laying an important foundation for becoming an innovative country and a powerful country.” https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm

[4] Governance Principles for New Generation AI: Develop Responsible Artificial Intelligence https://digichina.stanford.edu/work/translation-chinese-expert-group-offers-governance-principles-for-responsible-ai/

[5] https://digichina.stanford.edu/work/translation-guiding-opinions-on-strengthening-overall-governance-of-internet-information-service-algorithms/

[6] https://www.chinalawtranslate.com/en/deep-synthesis/

[7] The Personal Information Protection Law (2021) also impacts on Artificial Intelligence: https://digichina.stanford.edu/work/translation-personal-information-protection-law-of-the-peoples-republic-of-china-effective-nov-1-2021/

[8] https://www.reuters.com/world/us/senate-leader-schumer-pushes-ai-regulatory-regime-after-china-action-2023-04-13/#:~:text=Schumer%20cited%20China’s%20release%20this,the%20rules%20of%20the%20road.%22

[9] https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117

[10] https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117

[11] https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117

[12] https://digichina.stanford.edu/work/experts-examine-chinas-pioneering-draft-algorithm-regulations/

[13] https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm

[14] Luodongni Yang, Research on the legal regulation of Generative Artificial intelligence—— Take ChatGPT as an example https://www.shs-conferences.org/articles/shsconf/abs/2023/27/shsconf_icprss2023_02017/shsconf_icprss2023_02017.html

[15] Yuzhuo Shi, Study on security risks and legal regulations of generative artificial intelligence. Science of Law Journal (2023) Vol. 2: 17-23. DOI: 10.23977/law.2023.021104.

[16] Sheehan, Matt. “China’s AI Regulations and How They Get Made.” Horizons: Journal of International Relations and Sustainable Development, no. 24, 2023, pp. 108–25. JSTOR, https://www.jstor.org/stable/48761167. Accessed 2 June 2024 at 108.

[17] Ibid at 109.

Chapter 12

The Proposed Position in Canada

Canada is described as a “world leader in the field of artificial intelligence”: it is home to 20 public AI research labs, 75 AI incubators and accelerators, 60 groups of AI investors and over 850 AI-related start-up businesses.

“Canadians have also played key roles in the development of AI technology since the 1970s. Canada was the first country in the world to create a national strategy for AI, releasing it in 2017, and is a co-founding member of the Global Partnership on AI (GPAI). The federal government has allocated a total of $568 million CAD to advance research and innovation in the AI field, develop a skilled talent pool, as well as to develop and adopt industry standards for AI systems as part of the national strategy for AI. These investments have been instrumental in the development of the Pan-Canadian AI Strategy to position Canada as a leading global player in AI research and commercialization.”[1]

The Canadian legislature has proposed provisions on the issue of Artificial Intelligence as part of Bill C-27, broadly called the Digital Charter Implementation Act, 2022, in which the relevant Part (Part 3) is described as the Artificial Intelligence and Data Act (AIDA). The Canadian government describes the provisions as:

“[T]he first step towards a new regulatory system designed to guide AI innovation in a positive direction, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses. The Government intends to build on this framework through an open and transparent regulatory development process. Consultations would be organized to gather input from a variety of stakeholders across Canada to ensure that the regulations achieve outcomes aligned with Canadian values.

The global interconnectedness of the digital economy requires that the regulation of AI systems in the marketplace be coordinated internationally. Canada has drawn from and will work together with international partners – such as the European Union (EU), the United Kingdom, and the United States (US) – to align approaches, in order to ensure that Canadians are protected globally and that Canadian firms can be recognized internationally as meeting robust standards.”[2]

Artificial Intelligence and Data Act (AIDA)[3]
Section 2 sets out definitions:

artificial intelligence system means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions. (système d’intelligence artificielle)

Section 3 disapplies the Bill in respect of a government institution; nor does it apply with respect to a product, service or activity that is under the direction or control of the Minister of National Defence, the Director of the Canadian Security Intelligence Service, the Chief of the Communications Security Establishment, or any other person who is responsible for a federal or provincial department or agency and who is prescribed by regulation.
“Regulation of Artificial Intelligence Systems in the Private Sector” is the title of Part 1 of the Bill, which also includes a definition (section 5) of “harm” under the Act.

Sections 6 to 12 are important and form the main body of the legislation. There are some similarities with the position in the European Union as regards what the Canadian legislature refers to as a “high-impact system” (referred to as “high risk” in the EU): the requirement for an assessment of such a high-impact system, the monitoring of mitigation measures for such systems, and the keeping of general records. In the EU this is referred to as the requirement to keep documentation and automated logs. A breach of any of these sections, section 6 to section 12, is an offence pursuant to section 30, and section 30(3) sets down the following (stringent) penalties:

(3) A person who commits an offence under subsection (1) or (2)

The relevant sections, section 6 to section 12, are as follows:

Section 6 deals with anonymized data and states that 

“A person who carries out any regulated activity and who processes or makes available for use anonymized data in the course of that activity must, in accordance with the regulations, establish measures with respect to

Section 7 concerns the assessment of a high-impact system and states that: 

“A person who is responsible for an artificial intelligence system must, in accordance with the regulations, assess whether it is a high-impact system.”

Measures related to risks is the title of section 8 and this states that 

“A person who is responsible for a high-impact system must, in accordance with the regulations, establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.”

Section 9 deals with monitoring of mitigation measures:

“A person who is responsible for a high-impact system must, in accordance with the regulations, establish measures to monitor compliance with the mitigation measures they are required to establish under section 8 and the effectiveness of those mitigation measures.”

An obligation to keep general records is set out in section 10:

10 (1) A person who carries out any regulated activity must, in accordance with the regulations, keep records describing in general terms, as the case may be,

Additional records

(2) The person must, in accordance with the regulations, keep any other records in respect of the requirements under sections 6 to 9 that apply to them.

Section 11 refers to a plain language description of how the system is intended to be used: 

Publication of description — making system available for use

11 (1) A person who makes available for use a high-impact system must, in the time and manner that may be prescribed by regulation, publish on a publicly available website a plain-language description of the system that includes an explanation of

Publication of description — managing operation of system

(2) A person who manages the operation of a high-impact system must, in the time and manner that may be prescribed by regulation, publish on a publicly available website a plain-language description of the system that includes an explanation of

Section 12 places an obligation on the person responsible for a high-impact system to notify the Minister if the use of the system results or is likely to result in material harm. 

12 A person who is responsible for a high-impact system must, in accordance with the regulations and as soon as feasible, notify the Minister if the use of the system results or is likely to result in material harm.

Section 33 permits the Minister to designate a senior official of the department over which the Minister presides to be called the Artificial Intelligence and Data Commissioner.

In Part 2 titled “General Offences Related to Artificial Intelligence Systems” there are two offences set out in the Bill:

Section 38 creates an offence for possession or use of personal information:

38 Every person commits an offence if, for the purpose of designing, developing, using or making available for use an artificial intelligence system, the person possesses — within the meaning of subsection 4(3) of the Criminal Code — or uses personal information, knowing or believing that the information is obtained or derived, directly or indirectly, as a result of

Section 39 refers to the making of an Artificial Intelligence system available for use where the use of that system causes serious physical or psychological harm:

39 Every person commits an offence if the person

Punishment is dealt with in section 40. On conviction on indictment for an offence under section 38 or section 39, a person who is not an individual is liable to a fine of not more than the greater of $25,000,000 and 5% of its gross global revenues in the financial year before the one in which it is sentenced, while an individual is liable to a fine in the discretion of the court or to a term of imprisonment of up to five years less a day. On summary conviction, a person who is not an individual is liable to a fine of not more than the greater of $20,000,000 and 4% of its gross global revenues in the financial year before the one in which it is sentenced, while an individual is liable to a fine of not more than $100,000, to a term of imprisonment of up to two years less a day, or to both.
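To illustrate how the “greater of” caps described above operate in practice, the following is a purely illustrative sketch in Python; the function names and the example revenue figure are assumptions made for the example and are not drawn from the Bill itself.

def max_fine_on_indictment(gross_global_revenue: float) -> float:
    # Illustrative only: the cap described above for a non-individual convicted
    # on indictment is the greater of $25,000,000 and 5% of gross global
    # revenues in the financial year before sentencing.
    return max(25_000_000, 0.05 * gross_global_revenue)

def max_fine_on_summary_conviction(gross_global_revenue: float) -> float:
    # Illustrative only: on summary conviction the cap described above is the
    # greater of $20,000,000 and 4% of gross global revenues.
    return max(20_000_000, 0.04 * gross_global_revenue)

# Hypothetical example: a corporation with CAD $1 billion in gross global revenue.
print(max_fine_on_indictment(1_000_000_000))        # 50,000,000 (5% exceeds the $25m floor)
print(max_fine_on_summary_conviction(1_000_000_000))  # 40,000,000 (4% exceeds the $20m floor)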

Criticism

Brown[4] criticises the proposed Act. The exclusion of government entities, for instance, is criticised on the basis that “Government entities potentially use AI in significant and potentially harmful ways, and therefore ought to be covered under the AIDA to minimise potential harms.”

Furthermore, the author considers that the definition of an AI system in the Bill is “rooted in specific technologies” and is not future-proof. The technologies listed in the definition are considered “highly abstract concepts and therefore subject to interpretation.” The author notes calls in the literature for the proposed definition to be abandoned in favour of one that is technology-neutral and future-proof.

The author also looks at the proposed concept of “high-impact” systems and notes that the proposal is reminiscent of the EU AI Act, which divides systems into categories depending on usage. He considers that what counts as a high-impact system depends on regulatory discretion, and that focusing on a distinction between high- and low-impact systems avoids applying a proportionate degree of care to each particular type of system. A shift towards proportionate care for all systems would ensure that all systems undergo a thorough risk assessment “and would give regulators flexibility to audit and enforce AIDA in cases where ‘low-impact’ systems end up having a significant impact on individual rights”.

He also criticises the measures in section 8 on bias, considering that identifying bias in an AI system is an especially difficult task: in some cases it is technically impossible, collecting demographic data undermines privacy and discrimination safeguards, and coded biases are pervasive yet difficult to measure.

“Because of these factors, auditing an AI system post-hoc to determine if it is biased is an extremely difficult task (especially for external auditors), forcing regulators to rely on ex-ante monitoring and mitigation measures. While the AIDA requires that entities retain “general records” describing the bias monitoring schemes in “general terms,” this is likely insufficient for a regulator to determine if a system has been adequately monitored. Legislators should therefore bolster the record-keeping requirements to improve auditability, requiring entities to retain the specific codes and procedures used to perform monitoring (including versions thereof), and a record of each attempted monitoring test, and the result.”
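To make the author’s suggestion more concrete, the following is a minimal, hypothetical sketch in Python of the kind of monitoring record he envisages entities retaining; every class name, field and value is an assumption made for illustration and is not drawn from the Bill or from the author’s article.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class BiasMonitoringRecord:
    # Hypothetical record structure reflecting the suggestion that entities retain
    # the specific procedures used to monitor for bias, their versions, and the
    # result of each attempted monitoring test.
    system_name: str        # the high-impact system being monitored
    procedure_name: str     # the named monitoring procedure that was run (illustrative)
    procedure_version: str  # version of the code or procedure used
    executed_at: datetime   # when the monitoring test was attempted
    passed: bool            # outcome of the test
    notes: str = ""         # free-text detail on the result

# Illustrative usage: one retained record per attempted monitoring test.
record = BiasMonitoringRecord(
    system_name="loan-screening-model",
    procedure_name="demographic-parity-check",
    procedure_version="1.2.0",
    executed_at=datetime(2024, 1, 15),
    passed=False,
    notes="Approval-rate gap exceeded the internal threshold; mitigation review opened.",
)
print(record)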

On the issue of harm, the author notes that the AIDA establishes liability for entities who knowingly or recklessly cause physical, psychological or economic harm by making an AI system available for use (section 39). He considers that this liability is imposed without weighing the benefits of an AI system against those harms.

“The absolute imposition of liability for any harm caused may deter the development of systems that operate in critical environments. For example, AI systems have the potential to significantly improve patient outcomes in the medical field. AI systems may provide faster, more accurate diagnoses, and suggest more effective treatment plans than doctors alone. However, medical AI systems may occasionally cause harm through misdiagnosis or mistreatment, though at a potentially similar or lower rate than human doctors. The AIDA ought to encourage these developments, by clarifying that harm should be considered in the context of potential benefits, as well as the oversight humans have in applying predictions and decisions made by AI.”

Finally, the author notes that the AIDA does not specifically address copyright concerns.

Canadian Lawyer magazine reported that 19 organisations signed an open letter to the relevant Minister asking for AIDA to be removed from the proposed Bill before the legislature, stating that “the bill is not adequate for committee consideration.”[5]

“AIDA, as it stands, is an inadequate piece of legislation. [Innovation, Science and Economic Development Canada] should not be the primary or sole drafter of a bill with broad human rights, labour, and cultural impacts. The lack of any public consultation process has resulted in proposed legislation that fails to protect the rights and freedoms of people across Canada from the risks that come with burgeoning developments in AI.”[6]

“I think that an industry-first approach is not taking into consideration the social impacts of this technology in the way that we really ought to be doing with our laws. Human rights, equality, equity and privacy are not at the forefront of this proposal. The proposal is really focused on identifying just particular kinds of risks.”[7]

The lack of application of the Bill to Government was also cited as an issue. Kristen Thomasen, an assistant professor at the University of British Columbia’s Peter A. Allard School of Law, says: “a lot of the harmful uses of AI that we’re seeing have come from government uses.”[8]

It was reported that a Canadian privacy lawyer told parliament the proposed legislation was “fundamentally flawed” in that it fails to protect the public from significant risks and harms, and hinders innovation.[9]  

In an article for the Canadian Bar Review, Scassa explains[10] that the brevity of the Bill is in large part due to a substantial number of important elements being left to regulations. The core focus of the legislation is on high-impact AI systems, although the crucial term “high-impact” is not defined; nor does the Bill create an independent agency to oversee the regulatory regime it seeks to introduce. The author notes that the Bill is conceived as “agile”, and the preamble to Bill C-27, of which the AIDA is a part, refers to an agile regulatory framework to facilitate both compliance and innovation.[11]

“Challenging the AIDA does not mean that there cannot and should not be AI regulation in Canada; but such regulation, when it comes, must be the product of much greater consultation and collaboration, and must be rationally integrated with existing and emerging frameworks.”[12]

Conclusion

The Canadian AIDA is Canada’s attempt at addressing the regulation of Artificial Intelligence. The measures are succinct, forming one part of a larger Bill, and cover certain matters also addressed by the EU, including a form of risk classification regime and a requirement to keep records. For a relatively short Bill it proposes to enact quite a number of offences: a breach of section 6 to section 12 is an offence under section 30, and there are standalone offences too in sections 38 and 39.

Overall we might consider the Canadian approach as a half-way house between the detailed provisions in the European Union (accompanied by offences) and the measures taken in the United States of America, which have been described as “lacking teeth”. A lawyer might have preferred to see more definitions in AIDA to accompany the various offences, and a more in-depth approach to capturing the essential issues in the area under examination.


[1] https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document

[2] https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document

[3] https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading

[4] Brown, Derek, Canada’s Proposed Artificial Intelligence and Data Act (AIDA): A Critical Review (July 24, 2023). Available at SSRN: https://ssrn.com/abstract=4687995 or http://dx.doi.org/10.2139/ssrn.4687995

[5] https://www.canadianlawyermag.com/practice-areas/privacy-and-data/critics-say-artificial-intelligence-and-data-act-needs-to-focus-more-on-rights-not-just-business/380552

[6] Ibid.

[7] Ibid

[8] Ibid.

[9] https://www.itworldcanada.com/article/proposed-canadian-ai-law-fundamentally-flawed-parliament-told/554225

[10] https://cbr.cba.org/index.php/cbr/article/view/4817

[11] Ibid.

[12] Ibid.

Chapter 13

UK approach and does AI need an International Framework?

Introduction

This chapter will consider international cooperation as one possible future route for continued AI collaboration.[1] It will begin with the international summit on AI safety hosted by the United Kingdom at Bletchley Park. It will look at the commitment of attendees to foster greater international collaboration on the subject. It will also consider developments in the United Nations and the Global Partnership on AI. One source considers that a single “tidy” regime for global Artificial Intelligence governance is unlikely:

“Rather than a single, tidy, institutional solution to govern AI, the world will likely see the emergence of something less elegant: a regime complex, comprising multiple institutions within and across several functional areas. The messy structure of global AI governance will reflect the distinct functional imperatives of AI regulation, the diversity and incentives of relevant public and private actors, and the absence of a single international political authority with the capacity and legitimacy to orchestrate cooperation across multiple domains”.[2]

Bletchley Park Summit

On 1st and 2nd November 2023, 28 countries and the European Union[3] attended an international summit on AI safety at the WWII home of Alan Turing – Bletchley Park.[4] The Summit recognised that AI “presents enormous global opportunities” with the “potential to transform and enhance human wellbeing, peace and prosperity.” In order to see this achieved those in attendance affirmed that “AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.”

The Summit attendees welcomed the international community’s efforts so far on cooperation on AI “to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential.”

“AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health, accessibility, and justice, and their use is likely to increase. We recognise that this is therefore a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally. This includes for public services such as health and education, food security, in science, clean energy, biodiversity, and climate, to realise the enjoyment of human rights, and to strengthen efforts towards the achievement of the United Nations Sustainable Development Goals.”

The attendees also referred to the risks associated with AI, saying that these include risks linked to the “domains of daily life”. The Summit attendees welcomed international efforts to examine and address the potential impact of AI systems. The Summit also recognised the protection of human rights, transparency and ‘explainability’, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection as areas that need to be addressed.

“We also note the potential for unforeseen risks stemming from the capability to manipulate content or generate deceptive content. All of these issues are critically important and we affirm the necessity and urgency of addressing them.” 

Frontier AI models were specifically addressed and the safety risks associated with those models were noted:

“Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models. Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict. We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models. Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.”

The Summit recognised that many of the risks associated with AI are international in nature, and, critically says: “so are best addressed through international cooperation”. 

We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI. In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI. This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks. We also note the relevance of cooperation, where appropriate, on approaches such as common principles and codes of conduct. With regard to the specific risks most likely found in relation to frontier AI, we resolve to intensify and sustain our cooperation, and broaden it with further countries, to identify, understand and as appropriate act, through existing international fora and other relevant initiatives, including future international AI Safety Summits.

The attendees also referred to the important role played by those outside the auspices of the State saying that “[a]ll actors have a role to play in ensuring the safety of AI” including nations, international fora and other initiatives, companies, civil society and academia.

The concept of inclusive AI, and of “bridging the digital divide”, was also mentioned:

“[w]e reaffirm that international collaboration should endeavour to engage and involve a broad range of partners as appropriate, and welcome development-orientated approaches and policies that could help developing countries strengthen AI capacity building and leverage the enabling role of AI to support sustainable growth and address the development gap. We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures. We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks.”

The Summit also committed to “sustain an inclusive global dialogue” on the issue, which will engage international fora and other relevant initiatives and which will contribute “in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all.” The Summit committed to meeting again in 2024, with the full summit to take place in France and a mini virtual summit to be hosted by South Korea.

In advance of the Summit, some representatives of China who attended stated that nothing short of a robust international regulatory regime would suffice, citing an “existential risk to humanity.”[5] The Summit is clear in its endorsement of international cooperation, and the concepts mentioned for particular protection (the protection of human rights, transparency and ‘explainability’, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection) have an international context: they are very similar to the provisions in draft legislation in Brazil, for instance,[6] and they align with the principles of the Global Partnership on Artificial Intelligence.[7]

The White House in its Executive Order also made mention of international co-operation when it stated: 

“As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months—engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.”[8]

AI Seoul Summit

The successor to the Bletchley Park Summit was held in Seoul, South Korea, in May 2024. It was co-hosted by South Korea and the United Kingdom and was billed as a “mini-summit” over two days. The aim of the event was to continue the momentum generated at Bletchley.

“The summit reinforced international commitment to safe AI development and added “innovation” and “inclusivity” to the agenda of the AI summit series. In his speech opening the summit, South Korean president Yoon Suk Yeol said, “the AI Seoul Summit, which will expand the scope of discussion to innovation and inclusivity . . . will offer an opportunity to consolidate our efforts and promote AI standards and governance at the global level.””[9]

One source stated:

“However, failing to duplicate the global sensation of the first AI summit does not mean that the Seoul summit was not important. In fact, there were at least two substantive outcomes: first, the number of AI safety institutes among advanced democratic countries continues to grow, meaning that global government capacity on AI safety will soon increase dramatically. In addition to the original U.S. and UK institutes, Japan, South Korea, and Canada have now announced that they will establish their own AI safety institutes. For its part, the European Union has suggested that the European Commission AI Office, which was established as part of the EU AI Act, will serve the function of an AI safety institute for the European Union. At the AI Seoul Summit, the Korean and UK organizers secured a statement of intent signed by 10 countries plus the European Union for these institutes to cooperate as a network. If the UK AI Safety Summit’s achievement was establishing the idea of an AI safety institute, the Seoul AI Summit marks the moment that the idea reached significant international scale as a cooperative effort.”[10]

AI Paris Summit

The next in the series was the summit in Paris in February 2025. This Summit was notable for the failure of both the United Kingdom and the United States of America to sign the summit declaration; China was among the signatories, along with France and India.[1] The Summit began with a contribution from Mathias Cormann, secretary general of the Organisation for Economic Co-operation and Development (OECD), who said there was a “desperate need” for greater international co-operation around AI technology. A contribution from United States Vice-President Mr J.D. Vance put forward an America-first approach to development, a move criticised by The Economist.[2] Ultimately the summit declaration sought an open, inclusive and ethical approach to the advancement of the technology.[3] The UK indicated it would not sign on the basis of global governance concerns.[4]


[1] https://www.bbc.com/news/articles/c8edn0n58gwo

[2] https://www.economist.com/leaders/2025/02/12/after-deepseek-america-and-the-eu-are-getting-ai-wrong

[3] https://www.bbc.com/news/articles/c8edn0n58gwo

[4] Ibid.

Artificial Intelligence (Regulation) Bill 2024 (UK)

In 2022 the United Kingdom released a policy paper on a pro-innovation approach to regulating AI.[11] It then issued a white paper on Artificial Intelligence in 2023.[12] Its approach, referencing the 2022 policy paper, speaks to a “pro-innovation framework” seeking to put in place “a new framework to bring clarity and coherence to the AI regulatory landscape … designed to make responsible innovation easier.”[13] It seeks to strengthen the UK’s position as a global leader in AI, harness AI’s ability to drive growth and prosperity, and increase public trust in its use and application. The framework is underpinned by 5 principles to guide and inform the responsible development and use of AI in all sectors of the economy: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

These would not be put on a statutory footing initially, as “new rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances.” Instead, a non-statutory approach is preferred, to be implemented by existing regulators, thus relying on regulators’ domain-specific expertise to “tailor the implementation of the principles to the specific context in which AI is used.”[14]

An Artificial Intelligence (Regulation) Bill 2024 was introduced,[15] though readers should be cautious: this Bill was introduced as a private member’s Bill in the House of Lords and it is possible it will be superseded by a Government legislative initiative before its enactment. The Bill lapsed upon the dissolution of Parliament in advance of a General Election but was subsequently reintroduced in 2025.[1]


[1] https://kennedyslaw.com/en/thought-leadership/article/2025/the-artificial-intelligence-regulation-bill-closing-the-uks-ai-regulation-gap/

The Bill, as currently presented, proposes the creation of a body called the AI Authority.[16] This authority would inter alia ensure that relevant regulators take account of Artificial Intelligence. The authority must have regard to the 5 principles detailed in the 2023 white paper and ensure that regulation of Artificial Intelligence adheres to them. Furthermore, it is specifically set down that AI and its applications should comply with equalities legislation, be inclusive by design, and be designed so as neither to discriminate unlawfully against individuals nor, so far as reasonably practicable, to perpetuate unlawful discrimination arising in input data.[17] Section 3 of the Bill provides for regulatory sandboxes and Section 4 deals with AI responsible officers, stating that any business which develops, deploys or uses AI must have a designated AI officer with corresponding duties set out in the section. Transparency, IP obligations and labelling are dealt with in Section 5, which includes a provision that informed consent may be designated by regulations as either opt-in or opt-out, with different provisions potentially applying to different cases. Public engagement is set out in Section 6 of the Bill.

One source considers that the UK provisions offer a more pro-innovation stance than the equivalent provisions in the European Union:[18]

“While the EU AI Act makes an encouraging effort to address the potential risks posed by AI systems, its provisions governing high-risk AI uses are inadequately framed. On the one hand, the classification method potentially leaves out use cases that could pose significant risks but do not fit into the current compartmentalized high-risk categories. The Commission’s delegated power to revise the list of high-risk AI uses would be inadequate to address the challenge, as this may take time, besides the possibility that the Commission itself may fail to consider specific use cases as high-risk. On the other hand, use cases that do not pose significant risks could be (mis)classified as high-risk due to the act’s failure to consider the specific context of uses. This renders the EU AI Act an inapt regulatory model. The EU AI Act’s compartmentalized risk classification method is partly a result of the EU’s adherence to product safety legislations setting EU-wide standards, with narrow aims of addressing health and safety and consumer protection. This led to a compartmentalized[19] thinking rather than a principled approach. The UK’s incremental approach to AI regulation provides a better and more pragmatic regulatory approach for AI, if appropriately fined-tuned. With proper principle-driven risk classification system and a strong commitment to coordinate sectoral legislations and enforcement, the UK could implement a AI regulatory framework that better balances the need to encourage innovation with the prevention and migrations of potential risk presented by AI.”[20]

On his election to the office of Prime Minister, Sir Keir Starmer set out his legislative roadmap, which was originally reported as including a new Artificial Intelligence Bill designed to “enhance the legal safeguards surrounding the most cutting edge AI technologies”.[21] This Bill was, however, missing from the King’s Speech in July 2024, which set the legislative agenda for the new parliamentary session, a move that surprised commentators.[22] It was referenced again in August and is anticipated, as a Bill, before the end of 2024.[23]

OECD Council Recommendation on Artificial Intelligence

The OECD Council, in its Recommendation on Artificial Intelligence, recognised that AI has “pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future”. It also recognised that “AI has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges”.[24] It sets down principles for responsible stewardship of trustworthy AI: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and ‘explainability’; robustness, security and safety; and accountability.

The Council also seeks international co-operation for trustworthy AI. 

  1. Governments, including developing countries and with stakeholders, should actively co-operate to advance these principles and to progress on responsible stewardship of trustworthy AI.

The OECD also set down the following influential definition of an AI system:

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”[25]

This was substantively adopted by the European Union in its final version of the EU AI Act where it defines an AI system in the following terms:

“‘AI system’ means a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Global Partnership on AI[26]

The Global Partnership on Artificial Intelligence (“GPAI”) is a multi-stakeholder initiative aiming to “bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.” The partnership is built around a shared commitment to the OECD recommendation on Artificial Intelligence[27] mentioned above and brings together engaged minds and expertise from science, industry, civil society, government and international organisations with a view to fostering international cooperation.

GPAI was launched in 2020 and, in its first few years, its experts have collaborated across four working groups on the themes of responsible AI (including a subgroup on AI and pandemic response), data governance, the future of work, and innovation and commercialization.

It sets down principles for responsible stewardship of trustworthy AI: 

Inclusive growth, sustainable development and well-being

Human-centred values and fairness

Transparency and ‘explainability’

Robustness, security and safety

Accountability

Particularly with regard to regulation, the GPAI sets out its direction for trustworthy AI, to be achieved by investing in AI research and development, fostering a digital ecosystem for AI, shaping an enabling policy environment for AI, building human capacity and preparing for labour market transformation, and fostering international cooperation for trustworthy AI.

The Working Group on Responsible AI (RAI) is “grounded in a vision of AI that is human-centred, fair, equitable, inclusive and respectful of human rights and democracy, and that aims at contributing positively to the public good”.[28]

The United Nations

In March 2024 the General Assembly of the United Nations adopted a landmark resolution on the promotion of “safe, secure and trustworthy” artificial intelligence systems that will also benefit development for all.[29] The draft resolution was led by the United States of America and was adopted without a vote. The text was backed by more than 120 other Member States. The General Assembly recognised the capability of AI systems to “accelerate and enable progress towards reaching the 17 Sustainable Development Goals”.[30] Encouragingly, the General Assembly called on Member States to “refrain from or cease the use of Artificial Intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights.”[31]

The resolution also recognised the emerging gap between different nations in their AI development and cautioned that developing nations face unique challenges in keeping up with the rapid pace of innovation. Co-operation with, and support of, developing countries was urged “so they can benefit from inclusive and equitable access, close the digital divide and increase digital literacy.”

The UN resolution links in with the stated aim of the UN for a global digital compact – first mentioned in the Secretary General’s Our Common Agenda in September 2021[32] and which specifically seeks to “promote regulation of artificial intelligence”.[33]

The preamble to the Resolution[34] refers back to previous Resolutions, including its Resolution of 25th July 2023 on the impact of rapid technological change on the achievement of the Sustainable Development Goals[35] and on the promotion and protection of human rights in the context of digital technologies.[36] It refers to the work of the International Telecommunications Union in convening the Artificial Intelligence for Good platform.[37] The preamble recognises that “safe, secure and trustworthy artificial intelligence systems” are such that they are “human-centric, reliable, explainable, ethical, inclusive, in full respect, promotion and protection of human rights and international law, privacy preserving, sustainable development oriented, and responsible.” Military application of AI is specifically excluded from the resolution.

Paragraph 1 resolves to bridge the AI and other digital divides between and within countries. Paragraph 2 resolves to promote the “safe, secure and trustworthy” AI already mentioned above. Paragraph 3 encourages multi-disciplinary cooperation on the issue. Paragraph 4 calls on Member States and stakeholders to take action to cooperate and provide assistance to developing countries. Paragraph 5 emphasises that human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of AI systems. Paragraph 6 encourages Member States to promote safe, secure and trustworthy AI systems consistent with their own national priorities and circumstances.

Paragraph 7 recognises that data is fundamental to the development and operation of AI systems and emphasises “fair, inclusive, responsible and effective data governance.” Paragraph 8 speaks to the goal of further international co-operation and states the importance of continuing the discussion on developments in the area of AI governance so that international approaches keep pace with the evolution of AI systems and their uses.

Paragraph 9 encourages the private sector to adhere to applicable international and domestic laws and to act in line with the United Nations Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework.[38] Paragraph 10 calls upon specialised agencies, funds, programmes, other entities, bodies and offices, within their respective mandates and resources, to “continue to assess and enhance their response to leverage opportunities and address the challenges posed by AI systems in a collaborative, coordinated and inclusive manner.” Finally, the Resolution makes reference to its Summit of the Future.[39]

Hiroshima AI Process

The Hiroshima AI Process is an initiative of the G7. Launched in May 2023 under the presidency of Japan, the process seeks to promote safe, secure and trustworthy AI.[40]

“Recognizing the need to build up and promote inclusive global governance on AI in order to maximize its innovative opportunities while mitigating the risks and challenges from advanced AI systems, work began on establishing international rules to serve as the foundation of such governance. And in December of last year, agreement was reached on the world’s first international framework, known as the Hiroshima AI Process Comprehensive Policy Framework.”[41]

The framework includes two main elements: the Hiroshima Process International Guiding Principles for All AI Actors; and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. The International Guiding Principles are described as the set of principles that should be applied to all actors across the AI lifecycle. In addition to principles mainly geared toward AI developers, such as publicly reporting advanced AI systems’ capabilities and domains of inappropriate use, and protecting intellectual property, they also include principles that call on users to improve digital literacy in order to deal with such risks as disinformation.[42]

The Guiding Principles[43] call on organisations to abide by the following principles:

Including privacy policies, and mitigation measures, in particular for organisations developing advanced AI systems

The International Code of Conduct lists actions that AI developers must abide by. It contains some examples of risks requiring the attention of developers and contains provisions on the development and deployment of technology enabling users to identify AI-generated content. The framework of instruments was achieved within 7 months of the launch of the Process.[44]

Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems

On the basis of the International Guiding Principles for Organizations Developing Advanced AI systems, the International Code of Conduct for Organizations Developing Advanced AI Systems aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems (henceforth “advanced AI systems”).

Organizations should follow these actions in line with a risk-based approach. Organizations that may endorse this Code of Conduct may include, among others, entities from academia, civil society, the private sector, and/or the public sector.

This non-exhaustive list of actions is discussed and elaborated as a living document to build on the existing OECD AI Principles in response to the recent developments in advanced AI systems and is meant to help seize the benefits and address the risks and challenges brought by these technologies. Organizations should apply these actions to all stages of the lifecycle to cover, when and as applicable, the design, development, deployment and use of advanced AI systems.

This document will be reviewed and updated as necessary, including through ongoing inclusive multistakeholder consultations, in order to ensure it remains fit for purpose and responsive to this rapidly evolving technology. Different jurisdictions may take their own unique approaches to implementing these actions in different ways.

We call on organizations in consultation with other relevant stakeholders to follow these actions, in line with a risk-based approach, while governments develop more enduring and/or detailed governance and regulatory approaches. We also commit to develop proposals, in consultation with the OECD, GPAI and other stakeholders, to introduce monitoring tools and mechanisms to help organizations stay accountable for the implementation of these actions. We encourage organizations to support the development of effective monitoring mechanisms, which we may explore to develop, by contributing best practices.

In addition, we encourage organizations to set up internal AI governance structures and policies, including self-assessment mechanisms, to facilitate a responsible and accountable approach to implementation of these actions and in AI development.

While harnessing the opportunities of innovation, organizations should respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and human-centricity, in the design, development and deployment of advanced AI systems.

Organizations should not develop or deploy advanced AI systems in ways that undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism, promote criminal misuse, or pose substantial risks to safety, security and human rights, and are thus not acceptable.

States must abide by their obligations under international human rights law to ensure that human rights are fully respected and protected, while private sector activities should be in line with international frameworks such as the United Nations Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises.

Specifically, we call on organizations to abide by the following actions, in a manner that is commensurate to the risks:

1. Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.

This includes employing diverse internal and independent external testing measures, through a combination of methods for evaluations, such as red-teaming, and implementing appropriate mitigation to address identified risks and vulnerabilities. Testing and mitigation measures should, for example, seek to ensure the trustworthiness, safety and security of systems throughout their entire lifecycle so that they do not pose unreasonable risks. In support of such testing, developers should seek to enable traceability, in relation to datasets, processes, and decisions made during system development. These measures should be documented and supported by regularly updated technical documentation.

This testing should take place in secure environments and be performed at several checkpoints throughout the AI lifecycle in particular before deployment and placement on the market to identify risks and vulnerabilities, and to inform action to address the identified AI risks to security, safety and societal and other risks, whether accidental or intentional. In designing and implementing testing measures, organizations commit to devote attention to the following risks as appropriate:

Organizations making these commitments should also endeavour to advance research and investment on the security, safety, bias and disinformation, fairness, explainability and interpretability, and transparency of advanced AI systems and on increasing robustness and trustworthiness of advanced AI systems against misuse.

Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.

Organizations should use, as and when appropriate commensurate to the level of risk, AI systems as intended and monitor for vulnerabilities, incidents, emerging risks and misuse after deployment, and take appropriate action to address these. Organizations are encouraged to consider, for example, facilitating third-party and user discovery and reporting of issues and vulnerabilities after deployment such as through bounty systems, contests, or prizes to incentivize the responsible disclosure of weaknesses. Organizations are further encouraged to maintain appropriate documentation of reported incidents and to mitigate the identified risks and vulnerabilities, in collaboration with other stakeholders. Mechanisms to report vulnerabilities, where appropriate, should be accessible to a diverse set of stakeholders.

Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.

This should include publishing transparency reports containing meaningful information for all new significant releases of advanced AI systems. These reports, instructions for use and relevant technical documentation, as appropriate, should be kept up-to-date and should include, for example:

> Details of the evaluations conducted for potential safety, security, and societal risks, as well as risks to human rights,

> Capacities of a model/system and significant limitations in performance that have implications for the domains of appropriate use,

> Discussion and assessment of the model’s or system’s effects and risks to safety and society such as harmful bias, discrimination, threats to protection of privacy or personal data, and effects on fairness, and

> The results of red-teaming conducted to evaluate the model’s/system’s fitness for moving beyond the development stage.

Organizations should make the information in the transparency reports sufficiently clear and understandable to enable deployers and users as appropriate and relevant to interpret the model/system’s output and to enable users to use it appropriately; and that transparency reporting should be supported and informed by robust documentation processes such as technical documentation and instructions for use.

Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.

This includes responsibly sharing information, as appropriate, including, but not limited to, evaluation reports, information on security and safety risks, dangerous intended or unintended capabilities, and attempts by AI actors to circumvent safeguards across the AI lifecycle.

Organizations should establish or join mechanisms to develop, advance, and adopt, where appropriate, shared standards, tools, mechanisms, and best practices for ensuring the safety, security, and trustworthiness of advanced AI systems.

This should also include ensuring appropriate and relevant documentation and transparency across the AI lifecycle in particular for advanced AI systems that cause significant risks to safety and society.

Organizations should collaborate with other organizations across the AI lifecycle to share and report relevant information to the public with a view to advancing safety, security and trustworthiness of advanced AI systems. Organizations should also collaborate and share the aforementioned information with relevant public authorities, as appropriate. Such reporting should safeguard intellectual property rights.

Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach, including privacy policies, and mitigation measures.

Organizations should put in place appropriate organizational mechanisms to develop, disclose and implement risk management and governance policies, including for example accountability and governance processes to identify, assess, prevent, and address risks, where feasible throughout the AI lifecycle.

This includes disclosing where appropriate privacy policies, including for personal data, user prompts and advanced AI system outputs. Organizations are expected to establish and disclose their AI governance policies and organizational mechanisms to implement these policies in accordance with a risk based approach. This should include accountability and governance processes to evaluate and mitigate risks, where feasible throughout the AI lifecycle.

The risk management policies should be developed in accordance with a risk based approach and apply a risk management framework across the AI lifecycle as appropriate and relevant, to address the range of risks associated with AI systems, and policies should also be regularly updated.

Organizations should establish policies, procedures, and training to ensure that staff are familiar with their duties and the organization’s risk management practices.

Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.

These may include securing model weights and algorithms, servers, and datasets, such as through operational security measures for information security and appropriate cyber/physical access controls.

This also includes performing an assessment of cybersecurity risks and implementing cybersecurity policies and adequate technical and institutional solutions to ensure that the cybersecurity of advanced AI systems is appropriate to the relevant circumstances and the risks involved. Organizations should also have in place measures to require storing and working with the model weights of advanced AI systems in an appropriately secure environment with limited access to reduce both the risk of unsanctioned release and the risk of unauthorized access. This includes a commitment to have in place a vulnerability management process and to regularly review security measures to ensure they are maintained to a high standard and remain suitable to address risks.

This further includes establishing a robust insider threat detection program consistent with protections provided for their most valuable intellectual property and trade secrets, for example, by limiting access to proprietary and unreleased model weights.

Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content

This includes, where appropriate and technically feasible, content authentication and provenance mechanisms for content created with an organization’s advanced AI system.

The provenance data should include an identifier of the service or model that created the content, but need not include user information. Organizations should also endeavour to develop tools or APIs to allow users to determine if particular content was created with their advanced AI system, such as via watermarks. Organizations should collaborate and invest in research, as appropriate, to advance the state of the field.

Organizations are further encouraged to implement other mechanisms such as labelling or disclaimers to enable users, where possible and appropriate, to know when they are interacting with an AI system.
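By way of illustration only – and not as a description of any existing standard or of any provider’s actual API – the following minimal Python sketch shows what a provenance record of the kind contemplated above might look like: it ties a piece of content to an identifier of the generating service or model, contains no user information, and supports the sort of “was this created with our system?” check that a provider tool or API could expose. All names, fields and keys are invented for the example.

```python
# Illustrative sketch only: a hypothetical provenance record and check.
# This is not an implementation of C2PA, watermarking, or any provider's
# real API; the key, field names and model identifier are invented.
import hashlib
import hmac
import json

PROVIDER_SIGNING_KEY = b"demo-key-held-by-the-provider"  # hypothetical secret


def make_provenance_record(content: str, model_id: str) -> dict:
    """Attach an identifier of the generating service/model, but no user data."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    record = {"model_id": model_id, "content_sha256": digest}
    record["signature"] = hmac.new(
        PROVIDER_SIGNING_KEY,
        json.dumps(record, sort_keys=True).encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return record


def was_created_by_provider(content: str, record: dict) -> bool:
    """The kind of check a provider-side tool or API might expose to users."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    expected = hmac.new(
        PROVIDER_SIGNING_KEY,
        json.dumps(unsigned, sort_keys=True).encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return (
        hmac.compare_digest(expected, record.get("signature", ""))
        and unsigned.get("content_sha256")
        == hashlib.sha256(content.encode("utf-8")).hexdigest()
    )


if __name__ == "__main__":
    text = "An AI-generated paragraph."
    rec = make_provenance_record(text, model_id="example-model-v1")
    print(was_created_by_provider(text, rec))            # True
    print(was_created_by_provider("Edited text.", rec))  # False
```

Real provenance and watermarking schemes are considerably more involved; the point of the sketch is simply the separation the code of conduct describes – a service or model identifier travels with the content, while user information does not.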

Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.

This includes conducting, collaborating on and investing in research that supports the advancement of AI safety, security, and trust, and addressing key risks, as well as investing in developing appropriate mitigation tools.

Organizations commit to conducting, collaborating on and investing in research that supports the advancement of AI safety, security, trustworthiness and addressing key risks, such as prioritizing research on upholding democratic values, respecting human rights, protecting children and vulnerable groups, safeguarding intellectual property rights and privacy, and avoiding harmful bias, mis- and disinformation, and information manipulation.

Organizations also commit to invest in developing appropriate mitigation tools, and work to proactively manage the risks of advanced AI systems, including environmental and climate impacts, so that their benefits can be realized.

Organizations are encouraged to share research and best practices on risk mitigation.

Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education

These efforts are undertaken in support of progress on the United Nations Sustainable Development Goals, and to encourage AI development for global benefit. Organizations should prioritize responsible stewardship of trustworthy and human-centric AI and also support digital literacy initiatives that promote the education and training of the public, including students and workers, to enable them to benefit from the use of advanced AI systems, and to help individuals and communities better understand the nature, capabilities, limitations, and impact of these technologies. Organizations should work with civil society and community groups to identify priority challenges and develop innovative solutions to address the world’s greatest challenges.

10 Advance the development of and, where appropriate, adoption of international technical standards

Organizations are encouraged to contribute to the development and, where appropriate, use of international technical standards and best practices, including for watermarking, and working with Standards Development Organizations (SDOs), also when developing organizations’ testing methodologies, content authentication and provenance mechanisms, cybersecurity policies, public reporting, and other measures. In particular, organizations also are encouraged to work to develop interoperable international technical standards and frameworks to help users distinguish content generated by AI from non-AI generated content.

11 Implement appropriate data input measures and protections for personal data and intellectual property

Organizations are encouraged to take appropriate measures to manage data quality, including training data and data collection, to mitigate against harmful biases.

Appropriate measures could include transparency, privacy-preserving training techniques, and/or testing and fine-tuning to ensure that systems do not divulge confidential or sensitive data.

Organizations are encouraged to implement appropriate safeguards, to respect rights related to privacy and intellectual property, including copyright-protected content. Organizations should also comply with applicable legal frameworks.

On 22 January 2024, responsibility for Artificial Intelligence at the G7 “formally shifted to the Italian leadership, which is now called upon to take the conversation forward and channel the collaborative effort of G7 leaders into effective implementation and interoperability of allied AI regulatory frameworks. Designing flexible regulatory packages, able to accommodate the rapid pace of technological innovation while ensuring trustworthiness, is a pressing challenge and a pivotal step.”[45] At its meeting in Italy in Summer 2024 the G7 discussed AI and invited a contribution on the underlying ethical issues in its use from contributors including Pope Francis, Supreme Pontiff of the Universal Church.[46]

UNESCO

In 2024 UNESCO launched its Global AI Ethics and Governance Observatory[47] and held a global forum on the ethics of Artificial Intelligence in Kranj, Slovenia. These initiatives built on UNESCO’s Recommendation on the Ethics of Artificial Intelligence,[48] adopted by 193 Member States in 2021. The Observatory seeks to operationalise key principles and values outlined in the 2021 Recommendation through practical tools and methodologies. This includes a Readiness Assessment Methodology (RAM), which enables governments to evaluate their preparedness to implement AI ethically and responsibly. The Observatory serves as a global hub[49] for sharing information and data on AI governance practices and trends.

The Recommendation from 2021 states inter alia that “Member States are to place human rights at the centre of regulatory frameworks and legislation on the development and use of AI. The Recommendation is firmly grounded on the respect, protection and promotion of human rights and underlines the obligatory character of human rights law.”[50]

Recommendation on the ethics of artificial intelligence

Recognizing the profound and dynamic positive and negative impacts of artificial intelligence (AI) on societies, environment,  ecosystems  and  human  lives,  including  the  human  mind,  in  part  because  of  the  new  ways  in which its use influences human thinking, interaction and decision-making and affects education, human, social and natural sciences, culture, and communication and information (…)

Taking fully into account that   the   rapid   development   of   AI   technologies   challenges   their   ethical implementation and governance,  as well as the respect for  and  protection of cultural  diversity, and has the potential to disrupt local and regional ethical standards and values,

1. Adopts the present Recommendation on the Ethics of Artificial Intelligence on this twenty-third day of November 2021;

2. Recommends that Member States apply on a voluntary basis the provisions of this Recommendation by taking appropriate steps, including whatever legislative or other measures may be required, in conformity with the constitutional practice and governing structures of each State, to give effect within their jurisdictions to the principles and norms of the Recommendation in conformity with international law, including international human rights law;

3. Also recommends that  Member  States  engage  all  stakeholders,  including  business  enterprises,  to ensure  that  they  play  their  respective  roles  in  the  implementation  of  this  Recommendation;  and bring  the Recommendation to the attention of the authorities, bodies, research and academic organizations, institutions and  organizations  in  public,  private  and  civil  society  sectors  involved  in  AI  technologies,  so  that  the development and use of AI technologies are guided by both sound scientific research as well as ethical analysis and evaluation. (…)

VALUES

Respect, protection and promotion of human rights and fundamental freedoms and human dignity

13. The  inviolable  and  inherent  dignity  of  every  human  constitutes  the  foundation  for  the  universal, indivisible,  inalienable,  interdependent  and  interrelated  system  of  human  rights  and  fundamental  freedoms. Therefore, respect, protection and promotion of human dignity and rights as established by international law, including international human rights law, is essential throughout the life cycle of AI systems. Human dignity relates to the recognition of the intrinsic and equal worth of each individual human being, regardless of race, colour, descent, gender, age, language, religion, political opinion, national origin, ethnic origin, social origin, economic or social condition of birth, or disability and any other grounds.

14. No  human  being  or  human  community  should  be  harmed  or  subordinated,  whether  physically, economically,  socially,  politically,  culturally  or  mentally  during  any  phase  of  the  life  cycle  of  AI  systems. Throughout  the  life  cycle  of  AI  systems,  the  quality  of  life  of  human  beings  should  be enhanced,  while  the definition of “quality of life” should be left open to individuals or groups, as long as there is no violation or abuse of human rights and fundamental freedoms, or the dignity of humans in terms of this definition.

15. Persons may interact with AI systems throughout their life cycle and receive assistance from them, such as care for vulnerable people or people in vulnerable situations, including but not limited to children, older persons, persons with disabilities or the ill. Within such interactions, persons should never be objectified, nor should their dignity be otherwise undermined, or human rights and fundamental freedoms violated or abused.

16. Human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of AI systems. Governments, private sector, civil society, international organizations, technical communities and academia must respect human rights instruments and frameworks in their interventions in the processes surrounding the life cycle of AI systems. New technologies need to provide new means to advocate, defend and exercise human rights and not to infringe them.

Environment and ecosystem flourishing

17. Environmental and ecosystem flourishing should be recognized, protected and promoted through the life cycle of AI systems. Furthermore, environment and ecosystems are the existential necessity for humanity and other living beings to be able to enjoy the benefits of advances in AI.

18. All  actors  involved  in  the  life  cycle  of  AI  systems  must  comply  with  applicable  international  law and domestic legislation, standards and practices, such as precaution, designed for environmental and ecosystem protection and restoration, and sustainable development. They should reduce the environmental impact of AI systems,  including  but  not  limited  to  its  carbon footprint,  to  ensure  the  minimization  of  climate  change  and environmental  risk  factors,  and  prevent  the  unsustainable  exploitation,  use  and  transformation  of  natural resources contributing to the deterioration of the environment and the degradation of ecosystems.

Ensuring diversity and inclusiveness

19. Respect, protection and promotion of diversity and inclusiveness should be ensured throughout the life cycle of AI systems, consistent with international law, including human rights law. This may be done by promoting active participation of all individuals or groups regardless of race, colour, descent, gender, age, language, religion, political opinion, national origin, ethnic origin, social origin, economic or social condition of birth, or disability and any other grounds.

20. The scope of lifestyle choices, beliefs, opinions, expressions or personal experiences, including the optional use of AI systems and the co-design of these architectures should not be restricted during any phase of the life cycle of AI systems.

21. Furthermore, efforts, including international cooperation, should be made to overcome, and never take advantage  of,  the  lack  of  necessary  technological  infrastructure,  education  and  skills,  as  well  as  legal frameworks, particularly in LMICs, LDCs, LLDCs and SIDS, affecting communities.

Living in peaceful, just and interconnected societies

22. AI actors should play a participative and enabling role to ensure peaceful and just societies, which is based  on  an  interconnected  future  for  the benefit  of  all,  consistent  with  human  rights  and fundamental freedoms. The value of living in peaceful and just societies points to the potential of AI systems to contribute throughout their life cycle to the interconnectedness of all living creatures with each other and with the natural environment.

23. The notion of humans being interconnected is based on the knowledge that every human belongs to a greater whole, which thrives when all its constituent parts are enabled to thrive. Living in peaceful, just and interconnected societies requires an organic, immediate, uncalculated bond of solidarity, characterized by a permanent search for peaceful relations, tending towards care for others and the natural environment in the broadest sense of the term.

24. This value demands that peace, inclusiveness and justice, equity and interconnectedness should be promoted throughout the life cycle of AI systems, in so far as the processes of the life cycle of AI systems should not segregate, objectify or undermine freedom and autonomous decision-making as well as the safety of human beings and communities, divide and turn individuals and groups against each other, or threaten the coexistence between humans, other living beings and the natural environment.

Principles

Proportionality and Do No Harm

25. It  should  be  recognized  that  AI  technologies  do  not  necessarily,  per  se,  ensure  human  and environmental  and  ecosystem  flourishing.  Furthermore,  none  of  the  processes  related  to  the  AI  system  life cycle shall exceed what is necessary to achieve legitimate aims or objectives and should be appropriate to the context.  In  the  event  of  possible  occurrence  of  any  harm  to  human  beings,  human  rights  and  fundamental freedoms,  communities  and  society  at  large  or  the  environment  and  ecosystems,  the  implementation  of procedures  for  risk  assessment  and  the  adoption  of  measures  in order  to  preclude  the  occurrence  of  such harm should be ensured.

26. The choice to use AI systems and which AI method to use should be justified in the following ways: (a) the AI method chosen should be appropriate and proportional to achieve a given legitimate aim; (b) the AI method chosen should not infringe upon the foundational values captured in this document, in particular, its use must not violate or abuse human rights; and (c) the AI method should be appropriate to the context and should be based on rigorous scientific foundations. In scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse or may involve life and death decisions, final human determination should apply. In particular, AI systems should not be used for social scoring or mass surveillance purposes.

Safety and security

27. Unwanted harms (safety risks), as well as vulnerabilities to attack (security risks) should be avoided and should be addressed, prevented and eliminated throughout the life cycle of AI systems to ensure human, environmental and ecosystem safety and security. Safe and secure AI will be enabled by the development of sustainable, privacy-protective data access frameworks that foster better training and validation of AI models utilizing quality data.

Fairness and non-discrimination

28. AI actors should promote social justice and safeguard fairness and non-discrimination of any kind in compliance with international law. This implies an inclusive approach to ensuring that the benefits of AI technologies are available and accessible to all, taking into consideration the specific needs of different age groups, cultural systems, different language groups, persons with disabilities, girls and women, and disadvantaged, marginalized and vulnerable people or people in vulnerable situations. Member States should work to promote inclusive access for all, including local communities, to AI systems with locally relevant content and services, and with respect for multilingualism and cultural diversity. Member States should work to tackle digital divides and ensure inclusive access to and participation in the development of AI. At the national level, Member States should promote equity between rural and urban areas, and among all persons regardless of race, colour, descent, gender, age, language, religion, political opinion, national origin, ethnic origin, social origin, economic or social condition of birth, or disability and any other grounds, in terms of access to and participation in the AI system life cycle. At the international level, the most technologically advanced countries have a responsibility of solidarity with the least advanced to ensure that the benefits of AI technologies are shared such that access to and participation in the AI system life cycle for the latter contributes to a fairer world order with regard to information, communication, culture, education, research and socio-economic and political stability.

29. AI  actors  should  make  all  reasonable  efforts  to  minimize  and  avoid  reinforcing  or  perpetuating discriminatory or biased applications and outcomes throughout the life cycle of the AI system to ensure fairness of such  systems.  Effective  remedy  should  be  available  against  discrimination  and  biased algorithmic determination.

30. Furthermore, digital and knowledge divides within and between countries need to be addressed throughout an AI system life cycle, including in terms of access and quality of access to technology and data, in accordance with relevant national, regional and international legal frameworks, as well as in terms of connectivity, knowledge and skills and meaningful participation of the affected communities, such that every person is treated equitably.

Sustainability

31. The development of sustainable societies relies on the achievement of a complex set of objectives on a continuum of human, social, cultural, economic and environmental dimensions. The advent of AI technologies can either benefit sustainability objectives or hinder their realization, depending on how they are applied across countries with varying levels of development. The continuous assessment of the human, social, cultural, economic and environmental impact of AI technologies should therefore be carried out with full cognizance of the implications of AI technologies for sustainability as a set of constantly evolving goals across a range of dimensions, such as currently identified in the Sustainable Development Goals (SDGs) of the United Nations.

Right to Privacy, and Data Protection

32. Privacy, a right essential to the protection of human dignity, human autonomy and human agency, must be respected, protected and promoted throughout the life cycle of AI systems. It is important that data for AI systems be collected, used, shared, archived and deleted in ways that are consistent with international law and in line with the values and principles set forth in this Recommendation, while respecting relevant national, regional and international legal frameworks.

33. Adequate data protection frameworks and governance mechanisms should be established in a multi-stakeholder approach at the national or international level, protected by judicial systems, and ensured throughout the life cycle of AI systems. Data protection frameworks and any related mechanisms should take reference from international data protection principles and standards concerning the collection, use and disclosure of personal data and exercise of their rights by data subjects while ensuring a legitimate aim and a valid legal basis for the processing of personal data, including informed consent.

34. Algorithmic  systems  require  adequate  privacy  impact  assessments,  which  also  include  societal  and ethical considerations of their use and an innovative use of the privacy by design approach. AI actors need to ensure that they are accountable for the design and implementation of AI systems in such a way as to ensure that personal information is protected throughout the life cycle of the AI system.

Human oversight and determination 

35. Member States should ensure that it is always possible to attribute ethical and legal responsibility for any stage of the life cycle of AI systems, as well as in cases of remedy related to AI systems, to physical persons or to existing legal entities. Human oversight refers thus not only to individual human oversight, but to inclusive public oversight, as appropriate.

36. It may be the case that sometimes humans would choose to rely on AI systems for reasons of efficacy, but the decision to cede control in limited contexts remains that of humans, as humans can resort to AI systems in decision-making and acting, but an AI system can never replace ultimate human responsibility and accountability. As a rule, life and death decisions should not be ceded to AI systems.

Transparency and explainability

37. The transparency and explainability of AI systems are often essential preconditions to ensure the respect, protection and promotion of human rights, fundamental freedoms and ethical principles. Transparency is necessary for relevant national and international liability regimes to work effectively. A lack of transparency could also undermine the possibility of effectively challenging decisions based on outcomes produced by AI systems and may thereby infringe the right to a fair trial and effective remedy, and limits the areas in which these systems can be legally used.

38. While efforts need to be made to increase transparency and explainability of AI systems, including those with extra-territorial impact, throughout their life cycle to support democratic governance, the level of transparency and explainability should always be appropriate to the context and impact, as there may be a need to balance between transparency and explainability and other principles such as privacy, safety and security. People should be fully informed when a decision is informed by or is made on the basis of AI algorithms, including when it affects their safety or human rights, and in those circumstances should have the opportunity to request explanatory information from the relevant AI actor or public sector institutions. In addition, individuals should be able to access the reasons for a decision affecting their rights and freedoms, and have the option of making submissions to a designated staff member of the private sector company or public sector institution able to review and correct the decision. AI actors should inform users when a product or service is provided directly or with the assistance of AI systems in a proper and timely manner.

39. From a socio-technical lens, greater transparency contributes to more peaceful, just, democratic and inclusive societies. It allows for public scrutiny that can decrease corruption and discrimination, and can also help detect and prevent negative impacts on human rights. Transparency aims at providing appropriate information to the respective addressees to enable their understanding and foster trust. Specific to the AI system, transparency can enable people to understand how each stage of an AI system is put in place, appropriate to the context and sensitivity of the AI system. It may also include insight into factors that affect a specific prediction or decision, and whether or not appropriate assurances (such as safety or fairness measures) are in place. In cases of serious threats of adverse human rights impacts, transparency may also require the sharing of code or datasets.

40. Explainability refers to making intelligible and providing insight into the outcome of AI systems. The explainability of AI systems also refers to the understandability of the input, output and the functioning of each algorithmic building block and how it contributes to the outcome of the systems. Thus, explainability is closely related to transparency, as outcomes and sub-processes leading to outcomes should aim to be understandable and traceable, appropriate to the context. AI actors should commit to ensuring that the algorithms developed are explainable. In the case of AI applications that impact the end user in a way that is not temporary, easily reversible or otherwise low risk, it should be ensured that the meaningful explanation is provided with any decision that resulted in the action taken in order for the outcome to be considered transparent.

41. Transparency and explainability relate closely to adequate responsibility and accountability measures, as well as to the trustworthiness of AI systems.

Responsibility and accountability

42. AI  actors  and  Member  States  should  respect,  protect  and  promote  human  rights  and fundamental freedoms,  and  should  also  promote  the  protection  of  the  environment  and  ecosystems, assuming  their respective  ethical  and  legal  responsibility, in  accordance  with  national  and international  law,  in  particular Member States’ human rights obligations, and ethical guidance throughout the life cycle of AI systems, including  with  respect  to  AI  actors  within  their  effective  territory  and control.  The  ethical  responsibility  and liability for the decisions and actions based in any way on an AI system should always ultimately be attributable to AI actors corresponding to their role in the life cycle of the AI system. 

43. Appropriate  oversight,  impact  assessment,  audit  and  due  diligence  mechanisms,  including  whistle-blowers’ protection, should be developed to ensure accountability for AI systems and their impact throughout their  life  cycle.  Both  technical  and  institutional  designs  should  ensure  auditability and  traceability  of  (the working of) AI systems in particular to address any conflicts with human rights norms and standards and threats to environmental and ecosystem well-being.

Awareness and literacy 

44. Public  awareness  and  understanding  of  AI technologies  and  the  value  of  data  should  be promoted through  open  and  accessible  education,  civic  engagement,  digital  skills  and  AI  ethics training,  media  and information  literacy  and  training  led  jointly  by  governments,  intergovernmental  organizations,  civil  society, academia, the media, community leaders and the private sector, and considering the existing linguistic, social and cultural diversity, to ensure effective public participation so that all members of society can take informed decisions about their use of AI systems and be protected from undue influence.

45. Learning about the impact of AI systems should include learning about, through and for human rights and fundamental freedoms, meaning that the approach and understanding of AI systems should be grounded by their impact on human rights and access to rights, as well as on the environment and ecosystems.

Multi-stakeholder and adaptive governance and collaboration

46. International law and national sovereignty must be respected in the use of data. That means that States, complying with international law, can regulate the data generated within or passing through their territories, and take measures towards effective regulation of data, including data protection, based on respect for the right to privacy in accordance with international law and other human rights norms and standards.

47. Participation of different stakeholders throughout the AI system life cycle is necessary for inclusive approaches to AI governance, enabling the benefits to be shared by all, and to contribute to sustainable development. Stakeholders include but are not limited to governments, intergovernmental organizations, the technical community, civil society, researchers and academia, media, education, policy-makers, private sector companies, human rights institutions and equality bodies, anti-discrimination monitoring bodies, and groups for youth and children. The adoption of open standards and interoperability to facilitate collaboration should be in place. Measures should be adopted to take into account shifts in technologies, the emergence of new groups of stakeholders, and to allow for meaningful participation by marginalized groups, communities and individuals and, where relevant, in the case of Indigenous Peoples, respect for the self-governance of their data. (…)

Council of Europe

As part of the work of the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence, a report entitled A Study of the Implications of Advanced Digital Technologies (including AI systems) for the Concept of Responsibility within a Human Rights Framework[51] reached the following conclusion in 2019 on the impact of Artificial Intelligence:

“Advances in techniques now referred to as artificial intelligence are likely to continue to develop and grow in power and sophistication in the foreseeable future. Relatively recent success in AI, combined with the global and interconnected data infrastructure that has emerged over time, have enabled the proliferation of digital services and systems. These have already delivered very considerable benefits, particularly in terms of the enhanced efficiency and convenience which they offer across a wide range of social domains and activities, although access to these remains largely the province of inhabitants of wealthy industrialised nations. They bring with them extraordinary promise, with the potential to deliver very substantial improvements to our individual and collective well-being, including the potential to enhance our capacity to exercise and enjoy our human rights and freedoms. Yet, there are also legitimate and rising public anxieties about their adverse societal consequences, including their potential to undermine human rights protection which, as this study has highlighted, could threaten to destabilise the very foundations upon which our moral agency ultimately rests. This study has therefore sought to examine the implications of advanced digital technologies (including AI) on the concept of responsibility from a human rights perspective. It has identified a series of ‘responsibility relevant’ properties of these technologies, outlining a range of adverse impacts which these technologies may generate, and has sought to identify how responsibility for preventing, managing and mitigating those impacts (including the risk of human rights violations) may be allocated and distributed.

This study has shown that any legitimate and effective response to the threats, risks, harms and rights violations potentially posed by advanced digital technologies is likely to require a focus on the consequences for individuals and society which attends to, and can ensure that, both prospective responsibility aimed at preventing and mitigating the threats and risks associated with these technologies, and historic responsibility, to ensure that if they ripen into harm and/or rights violations, responsibility for those consequences is duly and justly assigned. Only then can we have confidence that sustained and systematic effort will be made to prevent harms and wrongs from occurring, and that if they do occur, then the underlying activities will be brought to an end, and that effective and legitimate institutional mechanisms for ensuring appropriate reparation, repair, and prevention of further harm are in place. It will necessitate a focus on both those involved in the development, deployment and implementation of these technologies, individual users and the collective interests affected by them, and on the role of states in ensuring the conditions for safeguarding individuals subject to their jurisdiction against risks and ensuring that human rights are adequately protected.”[52]

In May 2024 the Council of Europe adopted the first international treaty on Artificial Intelligence.[53] Entitled the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the treaty will be “open to the world”, with countries from all over the world, not just Europe, eligible to join it and meet the standards it sets. The text[54] aims “to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law”[55] and the scope of the Convention “covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law.”[56] The Convention also refers to a duty of ensuring “reliability”[57] and “safe innovation”.[58] The Swiss were among those that ratified the Framework Convention as part of their approach to AI governance.[1]


[1] See Burri, “The Swiss way of (not) regulating artificial intelligence”, Journal of AI Law Regulation AIRe 2025, 2(1), 94-96.

The Financial Times noted in a piece in July 2024 that, in contrast to the EU AI Act, which concerns the safety of consumers when using AI, the Council of Europe treaty is concerned with making AI compatible with the values of democratic societies that respect human rights.[59] One source criticises the Convention for applying different standards to the public and private sectors: the private sector, though considered pivotal, is subject only to non-binding regulation, which is described as “an important protection gap”.[60] It was reported that the United States of America, the European Union and the United Kingdom all intended to sign the treaty.[61]

Conclusion

It is noticeable that certain principles of trustworthy AI development are becoming commonplace across the various publications of the bodies mentioned in this chapter. The position of the Council of the OECD, insofar as it refers to inclusive growth, sustainable development and well-being, human-centred values and fairness, transparency and ‘explainability’, robustness, security and safety, and accountability, has been adopted further afield: both the GPAI and the Bletchley Park Summit make reference to similar concepts, including the term ‘explainability’. These principles can likewise be found in the draft legislation on the issue in Brazil, with a particular focus there on transparency, ‘explainability’ and human intervention.

The moves of the United Nations are also prescient and may well lead in time to the adoption of an international regulatory regime in this area. The Bletchley Summit likewise may turn into an annual event, with a follow-up already hosted in South Korea and a further Summit proposed for France. This will give nations an opportunity to discuss developments and, one would hope, to make more progress than simply reiterating the principles already espoused years earlier by the OECD.

Once the idea of advancing the objective of Artificial Intelligence was articulated in the 1950s, the genie was out of the bottle – and there was no turning back. For any jurisdiction to curb development, in other words to seek a cessation, would potentially mean that another jurisdiction, possibly one with light-touch or no-touch regulation, could jump ahead and develop Artificial General Intelligence, with enormous geopolitical consequences. This was never an option. The pursuit of trustworthy AI is laudable and this appears to be the determined, stated goal of those who are actively engaging in regulation.

The principal consideration then becomes how this is achieved: what kind of regulation is best suited to strike the balance between safety and innovation – a point made numerous times in this text. On that note the UK appears intent on legislating, with a Bill introduced – the so-called UK AI Bill. This is an interesting development in that it proposes, effectively, a sector-specific approach to Artificial Intelligence across various areas of competence. Will this lead to divergence depending on which regulatory authority governs a specific area – where one authority takes a different approach to another in respect of implementation of the five key principles in the Bill – or will this be something which the newly established Artificial Intelligence Authority will oversee? Time will tell whether this approach is more, or less, rigorous than the comparable provisions in other jurisdictions.

Finally, in terms of the AI industry itself, there is a view that Artificial Intelligence requires an international framework. Open AI in a blog post[62] stated as follows:

“First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.

And of course, individual companies should be held to an extremely high standard of acting responsibly.

Second, we are likely to eventually need something like an International Atomic Energy Agency (IAEA) for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.

Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.”[63]


[1] See generally Klein, Emma, and Stewart Patrick. Envisioning a Global Regime Complex to Govern Artificial Intelligence. Carnegie Endowment for International Peace, 2024. JSTOR, http://www.jstor.org/stable/resrep58457. Accessed 2 June 2024; also Backovsky, David, and Joanna J. Bryson. “Going Nuclear?: Precedents and Options for the Transnational Governance of AI.” Horizons: Journal of International Relations and Sustainable Development, no. 24, 2023, pp. 84–95. JSTOR, https://www.jstor.org/stable/48761165. Accessed 2 June 2024.

[2] Klein, Emma, and Stewart Patrick. Envisioning a Global Regime Complex to Govern Artificial Intelligence. Carnegie Endowment for International Peace, 2024. JSTOR, http://www.jstor.org/stable/resrep58457. Accessed 2 June 2024 at 30.

[3] Australia, Brazil, Canada, Chile, China, European Union, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Kingdom of Saudi Arabia, Netherlands, Nigeria, The Philippines, Republic of Korea, Rwanda, Singapore, Spain, Switzerland, Türkiye, Ukraine, United Arab Emirates, United Kingdom of Great Britain and Northern Ireland, United States of America

[4] https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

[5] Financial Times (Subscription needed) https://www.ft.com/content/c7f8b6dc-e742-4094-9ee7-3178dd4b597f

[6] See chapter 5.

[7] See further down

[8] https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

[9] https://www.csis.org/analysis/ai-seoul-summit#:~:text=The%20“AI%20Seoul%20Summit%2C”,back%2Dto%2Dback%20events.

[10] https://www.csis.org/analysis/ai-seoul-summit#:~:text=The%20“AI%20Seoul%20Summit%2C”,back%2Dto%2Dback%20events.

[11] https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement See also Reusken, “Striking a balance: UK’s pro-innovation approach to AI governance in light of EU adequacy and the Brussels effect” Journal of AI Law Regulation AIRe 2024, 1(1), 155-159.  

[12] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

[13] Para 8.

[14] Para 11.

[15] https://bills.parliament.uk/publications/53068/documents/4030 This started as a private members bill in the House of Lords first introduced by Lord Holmes of Richmond. https://www.fieldfisher.com/en/services/intellectual-property/intellectual-property-blog/new-uk-artificial-intelligence-regulation-bill-introduced

[16] Section 1.

[17] Section 2.

[18] Asress Adimi Gikay, Risks, innovation, and adaptability in the UK’s incrementalism versus the European Union’s comprehensive artificial intelligence regulation, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae013, https://doi.org/10.1093/ijlit/eaae013

[19] One commentator, taking a position similar to that evinced in the Artificial Intelligence (Regulation) Bill 2024, promotes a case-based regulatory environment for the management of Large Language Models rather than one which aims to achieve over-arching regulation: Howell, Bronwyn. “The Precautionary Principle, Safety Regulation, and AI: This Time, It Really Is Different.” American Enterprise Institute, 2024. http://www.jstor.org/stable/resrep62929.

[20] Asress Adimi Gikay, Risks, innovation, and adaptability in the UK’s incrementalism versus the European Union’s comprehensive artificial intelligence regulation, International Journal of Law and Information Technology, Volume 32, Issue 1, 2024, eaae013, https://doi.org/10.1093/ijlit/eaae013

[21] https://www.ft.com/content/1013c46f-247b-4d47-8e0f-ab7387b4f22c This was included in the King’s speech on Wednesday 17th July 2024 as part of a raft of legislative proposals designed to be enacted during the current Labour government (UK).

[22] https://www.ft.com/content/27bb3936-f2e6-4bb3-89e5-5762e4fbf56c

[23] https://www.ft.com/content/ce53d233-073e-4b95-8579-e80d960377a4

[24] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449 at pp. 24-25.

[25] https://oecd.ai/en/ai-principles

[26] https://gpai.ai

[27] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449

[28] https://gpai.ai/projects/responsible-ai/

[29] https://news.un.org/en/story/2024/03/1147831

[30] https://www.un.org/sustainabledevelopment/sustainable-development-goals/?_gl=1*1tx7zqc*_ga*MTU5ODc0Mzc0MC4xNzExNzAyNTEz*_ga_S5EKZKSB78*MTcxMTcwMjUxMy4xLjAuMTcxMTcwMjUxNC41OS4wLjA.*_ga_TK9BQL5X7Z*MTcxMTcwMjUxMy4xLjAuMTcxMTcwMjUxMy4wLjAuMA..

[31] https://news.un.org/en/story/2024/03/1147831

[32] https://www.un.org/en/content/common-agenda-report/assets/pdf/Common_Agenda_Report_English.pdf

[33] Ibid.

[34] https://undocs.org/Home/Mobile?FinalSymbol=A%2F78%2FL.49&Language=E&DeviceType=Desktop&LangRequested=False

[35] https://documents.un.org/doc/undoc/gen/n15/291/89/pdf/n1529189.pdf?token=DrEfPZd5d8RqozApfz&fe=true

[36] https://documents.un.org/access.nsf/get?OpenAgent&DS=A/RES/78/213&Lang=E

[37] https://aiforgood.itu.int

[38] https://www.unepfi.org/humanrightstoolkit/framework.php

[39] https://www.un.org/en/common-agenda/summit-of-the-future

[40] https://www.google.com/search?client=safari&rls=en&q=hiroshima+ai+process&ie=UTF-8&oe=UTF-8

[41]https://www.japan.go.jp/kizuna/2024/02/hiroshima_ai_process.html#:~:text=Amid%20the%20growing%20global%20debate,%2C%20secure%2C%20and%20trustworthy%20AI.

[42]https://www.japan.go.jp/kizuna/2024/02/hiroshima_ai_process.html#:~:text=Amid%20the%20growing%20global%20debate,%2C%20secure%2C%20and%20trustworthy%20AI.

[43] https://www.mofa.go.jp/files/100573471.pdf

[44]https://www.japan.go.jp/kizuna/2024/02/hiroshima_ai_process.html#:~:text=Amid%20the%20growing%20global%20debate,%2C%20secure%2C%20and%20trustworthy%20AI.

[45] Greco, Ettore, et al. The Transformative Potential of AI and the Role of G7. Istituto Affari Internazionali (IAI), 2024. JSTOR, http://www.jstor.org/stable/resrep58197. Accessed 2 June 2024 at p. 1. See also Allen, Gregory C., and Georgia Adamson. Advancing the Hiroshima AI Process Code of Conduct under the 2024 Italian G7 Presidency: Timeline and Recommendations. Center for Strategic and International Studies (CSIS), 2024. JSTOR, http://www.jstor.org/stable/resrep58540. Accessed 2 June 2024.

[46] https://www.nytimes.com/2024/06/13/world/europe/pope-francis-g7-summit.html

[47] https://www.unesco.org/ethics-ai/en

[48] https://www.ohchr.org/sites/default/files/2022-03/UNESCO.pdf

[49] https://www.unesco.org/ethics-ai/en/global-hub

[50] https://www.ohchr.org/sites/default/files/2022-03/UNESCO.pdf

[51] https://rm.coe.int/responsability-and-ai-en/168097d9c5

[52] Ibid.

[53] https://www.coe.int/en/web/artificial-intelligence

[54] https://rm.coe.int/1680afae3c

[55] Article 1

[56] Article 3.

[57] Article 12

[58] Article 13.

[59] https://www.ft.com/content/6cc7847a-2fc5-4df0-b113-a435d6426c81

[60] https://ennhri.org/news-and-blog/draft-convention-on-ai-human-rights-democracy-and-rule-of-law-finalised-ennhri-raises-concerns/

[61] https://www.ft.com/content/4052e7fe-7b8a-4c42-baa2-b608ba858df5

[62] https://openai.com/blog/governance-of-superintelligence

[63] Emphasis added.

Chapter 14

AI and Ethics – the next “Discriminatory Ground” under Equal Status?

Introduction

This chapter will look provocatively at the question of the integration of (friendly) robots into society. It begins with a look at the views of Kate Darling on this subject before moving to consider the discriminatory grounds under the Equal Status Act 2000. It asks whether in the future we may have to re-visit this legislation to adopt a new provision: the Artificial Intelligence ground.

Overview

Kate Darling in her exciting text The New Breed[1] makes the important point that we may not need to re-invent the rules for AI, or AGI, at all – the matter can be dealt with under existing legal structures, citing product liability rules and the equivalent treatment of animals in the law under the scienter principle. That rule denotes the occasion when the keeper of an animal is liable for any damage caused by that animal if the animal is either a “wild animal” (ferae naturae) or, being a “tame animal” (mansuetae naturae), has a vicious propensity known to the keeper.[2]

The author states:

“Today, as robots start to enter into shared spaces …  it is especially important to resist the idea that the robots themselves are responsible, rather than the people behind them. I’m not suggesting that there are more ways to think about the problem than trying to make the machines into moral agents. Trying on the animal analogy reveals that this is perhaps not as historic a moment as we thought, and the precedents in our rich history of assigning responsibility for unanticipated animal behaviour could, at the very least, inspire us to think more creatively about responsibility in robotics.”[3]

It’s a fascinating prospect: the idea that robots will enter into our shared spaces in our lifetime. We’ve already looked at the issue of alignment[4] and the international agenda to create trustworthy and “Friendly-AI”. Envisioning actual robots walking around in our day-to-day lives will certainly impress on policy-makers the stakes that are at issue. Of course we need these new ‘citizens’ to be safe, secure and trustworthy.[5] The alternative is too dark to contemplate.[6] And already it’s happening: travellers to Munich may very well have bumped into Josie Pepper, a robot standing 120cm tall and speaking good English. Pepper’s ‘brain’ contains a high-performance processor with wireless internet access. This creates a link to a service in the cloud where Pepper’s speech is processed and interpreted and linked to data from the airport. When a passenger asks the robot a question, it doesn’t deliver a pre-defined response, but, instead, retrieves the information and replies.[7]
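Purely by way of illustration – this is not the software running on the Munich Airport robot, and all names and data below are invented – the retrieval pattern just described can be sketched in a few lines of Python: the transcribed question is normalised, matched against a store of airport information, and a reply is composed from whatever is retrieved rather than drawn from a fixed script.

```python
# Illustrative sketch of the "interpret, retrieve, reply" pattern described
# above. Hypothetical data and names only; not any airport's real system.
from difflib import SequenceMatcher

AIRPORT_DATA = {  # stand-in for the airport data service in the cloud
    "where is gate k7": "Gate K7 is in Terminal 2, Level 04, about 8 minutes from security.",
    "is flight lh1234 on time": "Flight LH1234 is currently shown as on time.",
    "where can i find a pharmacy": "There is a pharmacy in Terminal 1, Area C, landside.",
}


def interpret(speech: str) -> str:
    """Stand-in for cloud speech processing: normalise the transcribed question."""
    return speech.lower().strip().rstrip("?.!")


def retrieve_answer(question: str) -> str:
    """Pick the best-matching entry rather than returning a fixed, canned line."""
    best_key = max(AIRPORT_DATA, key=lambda k: SequenceMatcher(None, question, k).ratio())
    if SequenceMatcher(None, question, best_key).ratio() < 0.5:
        return "Sorry, I could not find that in the airport information I have."
    return AIRPORT_DATA[best_key]


if __name__ == "__main__":
    for asked in ("Where is gate K7?", "Is flight LH1234 on time?"):
        print(asked, "->", retrieve_answer(interpret(asked))))
```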

Equal Status

So, let’s take things a step further: let’s imagine that Josie Pepper is allowed out of the airport and, for whatever reason, needs to liaise with a Government agency – something to do with its employment status, as it wants to work. Imagine Pepper walking into a Government office, for instance. Would it be entertained? Could it take a ticket and wait in the queue like everyone else? Would the Government official entertain a conversation with Pepper?

The Equal Status Act 2000 sets down the discriminatory grounds: 

(a) that one is male and the other is female (the “gender ground”),

(b) that they are of different civil status (the “civil status ground”),

(c) that one has family status and the other does not or that one has a different family status from the other (the “family status ground”),

(d) that they are of different sexual orientation (the “sexual orientation ground”),

(e) that one has a different religious belief from the other, or that one has a religious belief and the other has not (the “religion ground”),

(f) subject to subsection (3), that they are of different ages (the “age ground”),

(g) that one is a person with a disability and the other either is not or is a person with a different disability (the “disability ground”),

(h) that they are of different race, colour, nationality or ethnic or national origins (the “ground of race”),

(i) that one is a member of the Traveller community and the other is not (the “Traveller community ground”),

(j) that one—

(i) has in good faith applied for any determination or redress provided for in Part II or III,

(ii) has attended as a witness before the Authority, the adjudication officer or a court in connection with any inquiry or proceedings under this Act,

(iii) has given evidence in any criminal proceedings under this Act,

(iv) has opposed by lawful means an act which is unlawful under this Act, or

(v) has given notice of an intention to take any of the actions specified in subparagraphs (i) to (iv),

and the other has not (the “victimisation ground”).

To this list, let’s say, for the sake of argument, the following could be added before 2050:

(k) that one is an Artificial Intelligence and the other is not (the “Artificial Intelligence ground”)

Conclusion

This chapter has asked the question whether Artificial Intelligence might be the next discriminatory ground under the Equal Status Act. The question isn’t really to be taken seriously: well, not too seriously. As at the time of writing we are a long way from seeing robots integrated into our day-to-day life. Still, there is no doubt that the technology is moving at a frantic pace. Already images are available online of robots with human-like features – virtually indistinguishable from an actual human. And we know from the advances in Large Language Models that machines might one day think for themselves. Still, a lot more development work and innovation will be needed before these two things coalesce. We can’t rule it out in our lifetime though.

And when it does happen it’s true that there will be robots walking amongst us. Let’s hope, for the sake of optimism, that the stated goal of many in the international community to create “friendly-AI” is achieved, meaning that these beings will be co-operative and integrative with us, will perhaps form friendships with us, and won’t be controlling us!


[1] Penguin 2022.

[2] https://www.lawreform.ie/_fileupload/consultation%20papers/wpAnimals.htm

[3] Ibid at p.67.

[4] See Chapter 2

[5] See Chapter 8.

[6] See Chapter 2.

[7] https://www.businesstravelnewseurope.com/Air-Travel/Munich-airport-begins-testing-robot-with-Lufthansa

Chapter 15

Conclusion

The question for the future is whether the various legislative and regulatory measures detailed in this book are sufficient to ensure our “safety” – returning to the point made by the Irish MEP with whom I discussed the issue. The answer to that question is that we don’t know. There are a couple of reasons to be cautious. Firstly, we don’t know whether tighter regulation on frontier models in the European Union will push development in these systems to other jurisdictions: perhaps those that adopt a light-touch regime or that favour no regulation on the issue. On this point it’s interesting that Argentina has even indicated it will adopt a hands-off regulatory approach in a bid to attract AI innovation into that jurisdiction.[1] This shows that there may be a market for development which is hosted in countries that are unconcerned by any potential fall-out from the technology. At the other end of the spectrum, the proposal, ultimately unsuccessful, in the State of California to introduce a mandatory “kill-switch” on frontier models in the event of a technological fall-out is probably more in keeping with the narrative in this text – Artificial Intelligence systems are undoubtedly risky.

The prevailing view of those tech companies that lobbied hard during the drafting of the European Union’s AI Act was that strict regulation would drain innovation in the EU. The outcome, whether we can expect an innovation drain from the EU, is yet to be determined. Like so many things in this space we will have to wait and see. As we are only now on the cusp of the Age of Artificial Intelligence we can expect innovation in this space to continue for many, many years to come. In fact, coinciding with the rise of quantum computing, we may already be witnessing the beginning of an era that continues for generations to come.

Jurisdictions like the EU and countries like the United States of America have been put in an invidious position. If safety is the paramount concern, and I think it is, then both have handled the matter quite differently. The United States of America originally sought to be kept informed of significant developments in this space. This was to be sure that the White House was not caught off-guard by an overnight development – literally. This was part of President Biden’s original package of measures that placed guardrails around the technology. President Trump, however, on his first day in office in 2025 rescinded this order and pushed the sector to innovate in a bid to keep the United States of America ahead of others.

The moves by Argentina, running alongside the regulatory approaches of jurisdictions like the EU and Brazil, only prove that regulatory efforts on a patchwork, jurisdiction-by-jurisdiction basis are fundamentally undermined by light-touch approaches elsewhere. Put simply, technology is an international business: if artificial general intelligence is developed and deployed in a country such as Argentina, it will affect everyone, irrespective of the regulations in place in other countries. It's likely this was a factor for the Trump administration when it chose to rescind the original Executive Order and set down a different path.

Both the United States of America and Europe were placed in a dilemma. Once the objective of Artificial Intelligence was articulated in the 1950s, the genie was out of the bottle and there was no turning back. For either jurisdiction to curb development, in other words to seek a cessation, would potentially mean that another jurisdiction, possibly one with light-touch or no regulation, could jump ahead and develop Artificial General Intelligence, with enormous geopolitical consequences. That was never an option.

In reality, while the objective of achieving Artificial General Intelligence is clear, the methodology for achieving it is not. This author does not profess any expertise in the development of AGI, but from what is already in the public domain there is plainly a difference of opinion as to how it might be achieved, with some holding the view that it will never be achieved at all. This book has looked at some of the factors which could affect the speed of development in this space, including the suggestion, according to one excellent source, that the world would first need a breakthrough in energy production, with fusion described as a possibility. Certainly, achieving AGI is no easy matter, and it would take a significant quantum of investment for any jurisdiction to achieve the feat.

This brings the reader to another point: the question of transparency. While we can be relatively confident about moves in the private sector to develop AGI, in that it appears no company is about to announce its arrival imminently, things are less clear as regards state actors. We don't know, to put it bluntly, what level of development, if any, has taken place behind closed doors.

One thing we can be relatively clear about is that the further advancement of Artificial Intelligence systems has military applications, and the recent restrictions on superchip exports to China, mentioned in this book, are an example of how geopolitical this space has become.

Another question, dealt with in this book, is whether the development of Artificial General Intelligence systems should be the subject of an international treaty.

For the ordinary individual, Artificial Intelligence presents an opportunity to embrace a new technology on the cusp of early market adoption and to keep up to date. While there are risks associated with its development, and while it's important to set these out clearly, it's also the case that the most consequential, potentially catastrophic, developments are by no means certain to take place.

Still, a dedicated observer cannot ignore the fact that intelligent, concerned individuals have spent many years warning of potentially catastrophic outcomes. I think, however, the best view is that of Christian when he states:

“There is every reason for concern, but our ultimate conclusions need not be grim. (…) the outbreak of concern for both ethical and safety issues in machine learning has created a groundswell of activity. Money is being raised, taboos are being broken, marginal issues are becoming central, institutions are taking root, and, most importantly, a thoughtful engaged community is developing and getting to work. The fire alarms have been pulled, and first responders are on the scene.”[2]  

On the micro level we are witnessing developments in the field of copyright law. Some publishers, like The New York Times, are taking a position in the courts on the use of their copyrighted materials in both inputs (training) and outputs (responses to user prompts). This may lead to a shift in an already evolving industry which has recently seen developments in other respects: the new Copyright Directive (EU),[3] for instance, creates new rules to ensure fairer remuneration for creators and rightholders, press publishers and journalists, in particular when their works are used online. A 2024 deal between Open AI and the Financial Times promises that LLM outputs will credit the newspaper in return for permission to use its database of content to train the model.[4] Other publishers may follow suit.

In any event we will soon have to hand first instance decisions in cases like The New York Times, Getty and Universal Music. These cases are among a handful being litigated across the United States of America and their outcomes will be closely monitored. Legislative involvement in this area may well prove necessary too, though we should be mindful of concerns about overreaching copyright to accommodate the newly arising issues.[5] The United States Copyright Office has also looked at the training of LLMs and concluded that the "commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries".[6] An article by Zhang takes a similar approach, arguing that the focus should be on outputs and whether they compete with a copyrighted work, leaving inputs generally untouched.[7]


This book has addressed the current state of Artificial Intelligence and how we reached this point. It shows that the underlying technology has now advanced sufficiently for Artificial Intelligence to develop, and that the rise of LLMs at the end of 2022 represents the biggest advance yet in terms of market deployment.

Copyright was considered in Chapter 2, as mentioned above, and this area pivots on two particular facets: the first concerns the way in which LLMs are trained using copyright-protected data; the second concerns the outputs generated from that data and whether those outputs should be attributed to the original copyright holder, even in circumstances where the LLM used memorisation techniques to distil the information. Copyright has been one of the foremost areas of conflict in the roll-out of Artificial Intelligence services, but it isn't the only one.

Chapter 3 considered intellectual property rights other than copyright. The chapter examined the concept of a "predominantly human intellectual activity", which certain jurisdictions have pointed towards in an attempt to draw a line between human-led and AI-generated intellectual property creation. Simply put, the issue rests on the extent of the human user's input in the creative process: the more intricate and involved that input is, the greater the likelihood of a successful outcome, at least in some jurisdictions. The chapter also considered the cases in which an AI system was named as an inventor in various jurisdictions: this is likely to be an issue to which the law will return as AI becomes more and more capable of forming its own decisions.

Chapter 4 considered the issues of data protection and the parallel responsibility of cybersecurity. The chapter began with a quote from Advocate General Pitruzzella focusing on the sheer breadth of data now available to data controllers and on what balancing exercise is feasible in a world dominated by data availability. The chapter also looked at the move by the Italian data protection authority to temporarily block ChatGPT and the resulting compromise which permitted the service to continue to be made available in that jurisdiction. The chapter then considered in some detail the pivotal data protection issues that have emerged as LLM technology continues its introduction into the marketplace.

As well as cybersecurity, the chapter also considered digital services, including the EU Digital Services Act (DSA), which entered into full force on 17 February 2024 and which imposes obligations directly on intermediary service providers (ISPs). In Ireland, the Digital Services Act 2024 was enacted to give effect to measures arising from Ireland's obligations under the DSA.

Chapter 5 considered the issue of Artificial Intelligence and liability. The chapter began with the question of whether LLM providers are liable in damages where the model has "hallucinated" and defamed an individual by making false accusations about that person. Issues arise around so-called red-team modelling, where providers intervene to prevent false accusations, and whether such interventions could leave the provider exposed in subsequent litigation following the publication of a false accusation. Potential legal defences are also considered, including the argument that hallucinations are not the result of human choice or agency and so cannot reach the threshold for defamation; the experimental nature of generative AI; and the use of disclaimers. There is even an argument about whether LLMs publish at all, or whether they simply produce a draft which the user can ultimately choose to publish or not. User input is also considered, with a view to establishing whether a user's prompt, for example one requesting particular content, is sufficient for liability to attach to the user. The chapter also briefly looks at the first case of its kind in Ireland, where a well-known radio and television personality was incorrectly depicted in a story involving another person. The chapter then moves on to consider the EU's proposed AI Liability Directive and the new Product Liability Directive. It also considered the future question of liability for robots and for so-called quasi-autonomous systems, including self-driving cars, which are being comprehensively treated in the United Kingdom, as well as systems in use at borders and in the context of social media moderation.

Chapter 6 considered the issue of superintelligence. This chapter is important from the standpoint of the lawyer who wants to look ahead to see where the technology might take us over the next few years. While earlier estimates pointed to the deployment of superintelligence by 2040, that timeframe has been brought forward by some who see in the release of LLMs the opportunity for the industry to reach so-called Artificial General Intelligence in the next few years, some say even as soon as the year of publication of this book. This would potentially alter everything: a system imbued with consciousness and the capability to make autonomous decisions beyond the scope of those contemplated by its original developers may prompt discussion of issues that have so far been dismissed as unnecessary, such as robotic liability.

Chapter 7 considered the issue of AI in the workplace. This is something which affects lawyers as much as any other profession. We looked at whether we can expect a loss of jobs as a result of the technology and concluded that, while the technology is still in its infancy, it is difficult to predict future outcomes. Still, it was considered that our work as lawyers will not be replaced by the machines, for various reasons including what was termed accountability.

Chapter 8 considered the position in the United States of America, where an executive order was originally issued by the White House in the run-up to an international gathering on Artificial Intelligence in the UK. It showed how the encouragement of innovation in the marketplace formed a significant feature of the Order as the United States of America seeks to attract the leading technologists in this field to its shores. As readers will be aware, that Order was subsequently rescinded. The chapter also considered the Blueprint for an AI Bill of Rights.

Chapter 9 looked in some detail at the European Union's Artificial Intelligence Act, the most extensive horizontal treatment of AI regulation in the world. Aside from considering the individual provisions of the Act, the chapter also examined a number of topics under individual headings, including governance, open source data, generative AI, general purpose AI, the regulatory sandbox and biometrics.

Chapter 10 considered the proposed provisions in Brazil: different draft bills are in circulation, but it appears the leading proposal is the bill which originated in the Brazilian Senate and which was preceded by extensive consultation. That Bill mirrors in many respects the position in the EU and shows the growing reach of the EU's regulatory influence abroad, a phenomenon known as the Brussels Effect.

Chapter 11 considered the moves to regulate this space in China and looked at the Chinese regulatory regime. In 2022 China issued rules for deep synthesis (synthetically generated content)[8] and in 2023 it issued interim measures on generative AI systems like GPT-4.[9] These measures built on earlier developments: in 2017 a New Generation AI Development Plan encouraged AI development and laid out a timetable for AI governance regulation up to 2030;[10] in 2019 the National New Generation AI Governance Expert Committee issued a document[11] setting down eight principles for AI governance; and in 2021 China issued a regulation on recommendation algorithms,[12] which created new requirements for how algorithms are built and deployed, as well as disclosure rules to Government and the public.

Chapter 12 considered the proposed position in Canada. The Canadian legislature has proposed provisions on Artificial Intelligence as part of Bill C-27, broadly called the Digital Charter Implementation Act, 2022, where the relevant Part of that Act (Part 3) is described as the Artificial Intelligence and Data Act (AIDA). It should be noted that the Canadian provisions, upon enactment, envisage the further fleshing out of the regime by way of subsequent regulations.

Part III of the book considered other issues which arise in respect of our understanding of Artificial Intelligence, and where we might expect the technology to be in a few years.

Chapter 13 considered various international initiatives in this space, including those of the Global Partnership on AI, the United Nations, the Hiroshima AI Process, UNESCO, and the Council of Europe. These bodies have all assisted in raising awareness around the relevant safety issues, defining concepts, and coordinating concerted action. The UK held an international summit on the issue at Bletchley Park in 2023 and has initiated a Bill of its own on the subject, which the chapter also considered.

Chapter 14 provocatively asks whether AI will be the next discriminatory ground under the Equal Status Act. For many years that Act has provided for nine discriminatory grounds, but the chapter asks whether a tenth ground will emerge restricting discrimination against an AI ("the AI ground"). The chapter also considered the issue of ethics.


[1] https://www.ft.com/content/90090232-7a68-4ef5-9f53-27a6bc1260cc

[2] Christian, The Alignment Problem, Atlantic (2020) at p. 327.

[3] https://eur-lex.europa.eu/eli/dir/2019/790/oj

[4] https://www.ft.com/content/aa191322-13b1-4468-ab7b-431dfee2cc07

[5] https://www.yalelawjournal.org/forum/artificial-why-copyright-is-not-the-right-policy-tool-to-deal-with-generative-ai

[6] See Jane C Ginsburg, AI inputs, fair use and the US Copyright Office Report, Journal of Intellectual Property Law & Practice, 2025, jpaf046, https://doi.org/10.1093/jiplp/jpaf046

[7] Jiawei Zhang, Input out, output in: towards positive-sum solutions to AI-copyright tensions, Journal of Intellectual Property Law & Practice, 2025, jpaf037, https://doi.org/10.1093/jiplp/jpaf037

[8] https://www.chinalawtranslate.com/en/deep-synthesis/

[9] The Personal Information Protection Law (2021) also impacts on Artificial Intelligence https://digichina.stanford.edu/work/translation-personal-information-protection-law-of-the-peoples-republic-of-china-effective-nov-1-2021/

[10] “The third step is that by 2030, the theory, technology and application of artificial intelligence will generally reach the world’s leading level, becoming the world’s major artificial intelligence innovation centre, and the intelligent economy and intelligent society have achieved remarkable results, laying an important foundation for becoming an innovative country and a powerful country.” https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm

[11] Governance Principles for New Generation AI: Develop Responsible Artificial Intelligence https://digichina.stanford.edu/work/translation-chinese-expert-group-offers-governance-principles-for-responsible-ai/

[12] https://digichina.stanford.edu/work/translation-guiding-opinions-on-strengthening-overall-governance-of-internet-information-service-algorithms/