The global promotional campaign for Taylor Swift’s twelfth album, The Life of a Showgirl, turned into an unexpected legal case study on transparency in the use of artificial intelligence.
In collaboration with Google, the singer launched an online scavenger hunt in which fans could unlock twelve cryptic videos leading to the final clip, The Fate of Ophelia.
Soon after their release, many viewers noticed that the videos — surreal landscapes, unnaturally smooth textures, slightly robotic movements — appeared to be AI-generated, yet no disclaimer was provided.
According to an article published by TechCrunch on 6 October 2025, some observers speculated that Google’s own Veo 3 video generation model had been used to produce the visuals, though neither the company nor the artist confirmed it.
This incident has reignited a crucial legal question: what disclosure and labelling obligations apply when content is created or enhanced using artificial intelligence?
The European Union Artificial Intelligence Act (Regulation (EU) 2024/1689), adopted in spring 2024, establishes in Article 50 a general duty to identify AI-generated or AI-manipulated content whenever it is likely to mislead the public.
However, the same provision softens this duty for content that is evidently artistic, creative, satirical or fictional: in such cases, disclosure may be limited to a form that does not hamper the display or enjoyment of the work.
This contextual approach reflects Europe’s attempt to balance transparency with freedom of artistic expression.
Applied to the Taylor Swift campaign, this framework could allow such videos to be disseminated with only minimal, unobtrusive AI labelling, so long as their artistic and promotional nature is clear to the average viewer.
China has adopted a far more stringent model.
Under the Measures for the Labelling of AI-Generated Synthetic Content, issued in March 2025 and effective since 1 September 2025, all AI-generated content, whether images, videos, text or sound, must include a visible, indelible and technically verifiable label identifying its artificial nature.
Service providers must ensure technical traceability (through watermarking or embedded metadata) and bear direct legal liability for any omission.
No exception exists for artistic or promotional works: transparency is absolute, viewed as a condition of information security and public trust.
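The measures do not prescribe a single implementation, but the embedded-metadata half of the obligation is easy to picture in code. The sketch below, assuming the Pillow imaging library is installed, writes and then reads back a provenance label in a PNG file's text chunks; the field names (ai_generated, generator) are purely illustrative and are not the official Chinese labelling scheme.

```python
# Minimal sketch: embedding and verifying an AI-provenance label in PNG metadata.
# Assumes Pillow is installed (pip install Pillow); field names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable AI-provenance label in a PNG's text chunks."""
    image = Image.open(src_path)
    metadata = PngInfo()
    # Illustrative keys only; real labelling regimes define their own schemas
    # and typically add cryptographic signing on top (e.g. C2PA manifests).
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", "example-model-v1")
    image.save(dst_path, pnginfo=metadata)

def read_label(path: str) -> dict:
    """Read back the text chunks to check that the label survived."""
    return Image.open(path).text

if __name__ == "__main__":
    label_as_ai_generated("frame.png", "frame_labelled.png")
    print(read_label("frame_labelled.png"))
```

Plain text chunks like these are trivial to strip, which is precisely why the Chinese measures pair embedded markings with visible labels, and why industry initiatives such as C2PA bind provenance metadata to the file with cryptographic signatures.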
By contrast, the United States has no federal legislation specifically requiring labelling of AI-generated content.
Certain states have begun to legislate, such as California's AI Transparency Act (SB 942), which will require providers of widely used generative AI systems to embed disclosures in synthetic content and to offer free AI-detection tools. Yet the overall framework remains fragmented, creating uncertainty for platforms and creators operating across multiple jurisdictions.
The Swift-Google controversy highlights a growing tension between creative innovation and the duty of loyalty to the public.
Even when artistic intent is evident, the absence of a clear disclosure regarding AI use may constitute a misleading commercial practice, a breach of consumer-protection rules or, at the very least, a failure of the duty of loyalty owed to the public.
The issue thus extends beyond aesthetics — it raises questions of truthfulness, consumer protection and ethical responsibility in digital communication.
Conclusion
Europe opts for contextual transparency, China enforces absolute transparency, and the United States, for now, offers only a regulatory patchwork.
The Taylor Swift case shows that AI-content labelling is no longer a technical issue but a global compliance concern, shaping how creativity, technology and truth will coexist in the public sphere.
This article was prepared by a French attorney. References to Chinese and U.S. law are provided for information purposes only and do not constitute legal advice. For expert analysis of these jurisdictions, local counsel should be consulted.