When pop icons like Dua Lipa, Sir Paul McCartney, and Sir Elton John take a public stance on a parliamentary bill, attention inevitably follows. Their concern? That the UK was preparing to allow tech firms to train artificial intelligence systems on vast quantities of creative content (lyrics, melodies, and writings) without the consent of their original authors. Sir Elton John did not mince words when he declared such practices to be “theft, thievery on a high scale.” These criticisms, amplified by their supporters in the House of Lords, became a powerful symbol of the cultural and economic stakes behind a bill otherwise described in bureaucratic terms: the Data (Use and Access) Bill.
On 11 June 2025, the UK Parliament passed the Data (Use and Access) Bill, set to become the Data (Use and Access) Act 2025 upon Royal Assent. While the statute includes a wide array of amendments to the UK GDPR and the Privacy and Electronic Communications Regulations (PECR), its most politically and legally contentious provisions relate to the use of copyrighted content for the training of artificial intelligence (AI) models.
The Act does not directly prohibit the use of copyright-protected materials in AI training, nor does it impose immediate transparency obligations. However, after sustained debates (and multiple rounds of amendments proposed by the House of Lords), the legislation includes a commitment by the Government to publish a report on the use of copyright-protected works in AI development within nine months of the Act’s entry into force, with an interim progress report within six months.
These obligations are set out in Clause 134, introduced via Commons Amendments 45 to 49, which provide that the Secretary of State must assess:
“ways of enforcing requirements and restrictions relating to
(i) the use of copyright works to develop AI systems, and
(ii) the accessing of copyright works for that purpose (for example, by web crawlers),
including enforcement by a regulator.”
This formulation notably places the focus on two key aspects: use and access. The reference to “web crawlers” as an example of data access tools makes clear that the law targets typical scraping practices used to collect large-scale textual or visual datasets for AI training.
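Machine-readable access signals of this kind already exist in practice: the long-standing robots.txt convention lets a site declare which crawlers may fetch which paths. The sketch below, using Python’s standard `urllib.robotparser` and a purely hypothetical crawler name (`ExampleAIBot`), shows how a scraper could check such a signal before accessing works:

```python
from urllib import robotparser

def may_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt text permits user_agent to fetch url."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical robots.txt that excludes an AI-training crawler
# while allowing everyone else.
ROBOTS = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

print(may_crawl(ROBOTS, "ExampleAIBot", "https://example.com/lyrics/song1"))  # False
print(may_crawl(ROBOTS, "OtherBot", "https://example.com/lyrics/song1"))      # True
```

Whether an AI developer is legally obliged to honour such a signal is precisely the kind of enforcement question the Secretary of State’s report is meant to address.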
Furthermore, the Act imposes a legal obligation on the Government to evaluate whether UK copyright holders are sufficiently equipped to detect and prevent infringement — including infringement committed outside the UK by foreign AI developers. In this respect, Clause 49R (amending Clause 46) reads:
“The consideration and proposals […] must include consideration of, and proposals relating to, AI systems developed outside the United Kingdom.”
This extraterritorial focus reflects growing concerns about the global nature of model training pipelines, and a desire to level the playing field for domestic rightsholders.
Although the Act itself does not codify a formal opt-out mechanism, its drafting and the political debates that accompanied it suggest that the Government leans toward an approach akin to that of Article 4(3) of the EU Directive 2019/790 (DSM Directive). Under that regime, rightsholders can opt out of the general text and data mining (TDM) exception by reserving their rights in a machine-readable manner.
In the UK context, the Government has, to date, refused to embed a binding transparency or opt-out obligation within the Act itself, on the grounds that it would be premature to legislate before the results of its ongoing copyright and AI consultation. Nonetheless, the inclusion of a mandatory report and follow-up legislative proposals indicates that a regulatory solution, possibly modelled on the EU’s TDM opt-out, is under consideration.
Indeed, successive Lords Amendments (including Amendments 49B, 49D, and 49F) sought to impose detailed transparency and disclosure obligations on AI operators.
While all of these amendments were ultimately rejected by the Commons, the final compromise includes a commitment to produce:
“a draft Bill containing legislative proposals to provide transparency to copyright owners regarding the use of their copyright works as data inputs for AI models made available by relevant traders” (Clause 49F(2)).
This draft Bill must be laid before the relevant Parliamentary Committees for pre-legislative scrutiny and is expected to flesh out the scope of those transparency obligations.
This architecture mirrors the structure of the EU’s DSM Directive, with an additional emphasis on regulatory enforcement and extraterritorial application. It can therefore be expected that any UK opt-out mechanism will need to combine transparency obligations (disclosure of training inputs), identification tools (to allow rightsholders to exercise rights), and opt-out infrastructure (where rightholders clearly express their reservation).
The legal response to AI training practices has drawn sharp criticism from parts of the creative sector. High-profile artists such as Sir Elton John have condemned what they see as unchecked appropriation of artistic labour. In his words, allowing AI firms to freely train on UK content would amount to:
“committing theft, thievery on a high scale.”
This position was echoed by peers in the House of Lords, who unsuccessfully attempted to impose direct constraints on text and data mining, especially where it implicates artistic or journalistic works.
At the same time, the UK Government has refrained from choosing a clear policy direction. It maintains that the complexity of the issue — particularly when it comes to enforcement against foreign actors — justifies a cautious, staged approach. The Getty Images v Stability AI litigation, currently before the UK High Court, will likely influence the future development of both statutory and case-law guidance on AI training practices.
Conclusion
The Data (Use and Access) Act 2025 does not impose immediate legal obligations on AI developers to secure consent from rightsholders, nor does it provide a ready-made transparency registry. But it signals a regulatory trajectory that strongly suggests the eventual emergence of a formal opt-out system — one likely to be shaped by the interplay of legislative follow-up, litigation, and international alignment.
The law therefore marks an important inflection point in UK data and copyright law, and particularly in the future governance of AI. Organisations involved in AI development should begin preparing for enhanced disclosure duties and compliance expectations, even before secondary legislation comes into effect.
This article is intended for general informational purposes. As a French-qualified lawyer, I do not provide legal advice under English law. Readers seeking operational guidance under the Data (Use and Access) Act 2025 should consult a qualified solicitor or barrister in the United Kingdom.
Image by Justin Higuchi on Flickr