


A French Legal Perspective
The recent controversy surrounding Sora 2, OpenAI’s generative video platform, provides a striking example of how rapidly the boundaries of image rights are being tested by artificial intelligence.
Launched globally in October 2025, Sora 2 was presented as the first application to combine AI-generated video with social-media features, allowing users to create synthetic clips featuring the faces and voices of real individuals, provided those individuals had given their consent.
Within weeks, however, several unauthorized deepfakes of public figures began circulating online, prompting OpenAI to respond. On 21 October 2025, CEO Sam Altman announced tighter guardrails: the company abandoned its passive “opt-out” system in favor of a strict “opt-in” consent regime for any use of a person’s likeness or voice. Altman stated that OpenAI was “deeply committed to protecting performers from the misappropriation of their voice and likeness” and publicly supported the proposed NO FAKES Act in the United States, which aims to prohibit digital impersonations made without consent.
Following public complaints, OpenAI also removed several controversial clips depicting Martin Luther King Jr. and Robin Williams after their families requested their removal, denouncing the videos as “deeply disrespectful.”
In France, this global debate soon took a local turn. At the end of October 2025, leading French YouTuber Tibo InShape (Thibaud Delapart) agreed to lend his image and voice to Sora 2 as part of an experimental collaboration. Within days, the experiment spiraled out of control: by 28 October, hundreds of AI-generated videos featuring his digital double had flooded TikTok and X, attributing to him statements he had never made. While some parodies were playful, others conveyed racist or misogynistic messages, triggering public outrage.
On 31 October, Le Monde reported “a wave of racist AI-generated videos featuring Tibo InShape,” illustrating how fragile the boundary between reality and simulation has become.
Under French law, authorization to use one’s image must always be limited, specific, and contextualized. The right to one’s image, protected by Article 9 of the French Civil Code, requires explicit consent regarding the purpose, duration, and context of any use. Granting such consent on a platform like Sora 2 does not amount to a general waiver of control. French courts interpret consent strictly: it applies only to the specific uses agreed upon. Any reuse that exceeds that scope — for example, the creation of hateful, misleading, or degrading content — remains unlawful and may give rise to civil or criminal liability, including for defamation, insult, or incitement to hatred.
The issue is not confined to France. In the United States, actors such as Bryan Cranston and the SAG-AFTRA union have publicly challenged OpenAI over unauthorized Sora 2 recreations, while Zelda Williams (daughter of Robin Williams) and Bernice King (daughter of Martin Luther King Jr.) condemned deepfake videos of their late relatives that circulated on the same platform. Across jurisdictions, a common principle emerges: even when consent is initially granted, it cannot be construed as a blanket renunciation of personality rights.
In practice, an influencer’s authorization merely creates a contractual framework with the platform — it does not erase the individual’s inherent right to oppose distorted, misleading, or harmful uses of their image.
When such AI-generated content crosses the line, civil action remains possible for violation of image rights or human dignity, enabling urgent injunctions, takedown orders, and the award of damages. Criminal proceedings may also be brought for defamation, insult, or incitement to hatred, as the synthetic nature of the content provides no exemption from liability. Platforms, for their part, are bound by the EU Digital Services Act, applicable in France since February 2024, which obliges them to respond swiftly to notices of illegal content. And where an influencer’s name or pseudonym is registered as a trademark, further claims may be pursued for trademark infringement, unfair competition, or parasitism.
These mechanisms provide partial but essential means of restoring control over one’s image — provided swift action is taken and evidence is preserved. Yet the episode also highlights the need for clearer regulation of “consented digital avatars” before artistic experimentation with AI turns into a vast laboratory of uncontrolled identity replication.

