


Article 50 of the European Union Artificial Intelligence Act (“AI Act”) establishes a novel and far-reaching transparency regime applicable to content generated or manipulated by artificial intelligence systems. Although drafted in relatively concise terms, Article 50 introduces a structural shift in the regulation of digital content, requiring that the artificial origin of certain outputs be identifiable, technically detectable and, in specific cases, explicitly disclosed to the public.
The Draft Code of Practice on Transparency of AI-Generated Content, recently published by the European Commission, represents a decisive step in the operationalisation of Article 50. While the Code does not create new binding obligations, it provides a detailed and structured interpretation of how the transparency requirements of the AI Act are expected to be implemented in practice. As such, it already constitutes a central reference point for market participants, supervisory authorities and, ultimately, courts.
The Draft Code adopts a clear and deliberate distinction between two categories of actors within the AI value chain: providers of generative AI systems, who bear responsibility for technical marking and detectability, and deployers of AI systems, who are responsible for transparency towards the public in relation to specific categories of content.
One of the most significant contributions of the Draft Code lies in its clarification of the respective scopes of responsibility of providers and deployers. The transparency obligations set out in Article 50 do not apply uniformly to all actors, nor do they concern the same categories of content.
With regard to providers of generative AI systems, the Draft Code adopts a broad, technology-driven approach. The decisive criterion is the artificial origin of the content: where an output is generated or materially manipulated by an AI system, it falls within the scope of the obligations relating to marking and detectability, regardless of its intended use, its audience or whether it is publicly disseminated. Images, audio, video, text, multimodal outputs and even purely internal or technical content are therefore treated in a uniform manner. This approach reflects the overarching objective of ensuring end-to-end traceability of AI-generated content throughout its lifecycle.
By contrast, the obligations applicable to deployers are grounded in a contextual and risk-based logic. They apply only where the dissemination of AI-generated or AI-manipulated content may mislead the public as to its authenticity. The Draft Code therefore limits mandatory disclosure obligations to deepfake content, namely audio, visual or audiovisual material that realistically depicts persons, objects, places or events, and to certain categories of textual content, but only where such text is published for the purpose of informing the public on matters of public interest and has not been subject to effective human editorial control. This asymmetry is not incidental; it reflects the core regulatory logic of Article 50 itself.
When addressing providers of generative AI systems, the Draft Code embraces a resolutely technical conception of transparency. It is premised on the widely acknowledged observation that no single marking technique is currently capable of satisfying, on its own, the requirements of effectiveness, robustness, reliability and interoperability laid down by the AI Act. The Draft Code therefore promotes a layered approach combining multiple complementary techniques.
Providers are expected, where technically feasible, to embed provenance information within the metadata of generated content, secured by digital signatures ensuring integrity and authenticity. These measures should be complemented by imperceptible watermarking techniques integrated directly into the content itself, designed to withstand common transformations and adversarial manipulation. Where such techniques prove insufficient, particularly for certain text-based or hybrid outputs, the Draft Code expressly contemplates the use of additional measures such as logging mechanisms or digital fingerprinting.
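To make the metadata-based layer of this approach concrete, the following is a minimal, purely illustrative sketch of how a provider might attach a signed provenance manifest to a generated output. All names are hypothetical, and the HMAC signature is a stand-in for the asymmetric digital signatures that real provenance schemes (such as those following the C2PA specification) rely on, so that tampering with either the content or the manifest becomes detectable:

```python
import hashlib
import hmac
import json

# Hypothetical provider signing key. In practice an asymmetric key pair
# would be used so third parties can verify without holding any secret.
SIGNING_KEY = b"provider-demo-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a signed provenance record for a piece of AI-generated output.

    The manifest records the generating system and a hash of the content;
    the signature binds the two together, so any later alteration of the
    content or the manifest invalidates the record.
    """
    manifest = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}
```

In a real deployment this record would be embedded in the content's metadata container (e.g. image EXIF/XMP fields) rather than returned separately, and would typically be complemented by the imperceptible watermarking and fingerprinting layers the Draft Code describes.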
Crucially, the Draft Code emphasises that transparency should be integrated at the earliest possible stage of system design. Providers of generative models, especially where those models are intended for downstream integration by third parties, are encouraged to implement marking mechanisms at model level in order to facilitate compliance across the value chain. This transparency-by-design approach is one of the defining features of the Draft Code.
Technical traceability would, however, be of limited value in the absence of effective verification capabilities. Accordingly, the Draft Code recommends that providers make available, free of charge, tools enabling third parties to detect whether content originates from their AI systems. Detectability is conceived as a durable obligation, extending throughout the lifecycle of the system and, where necessary, ensured through cooperation with competent authorities in the event of market exit.
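As a rough illustration of what such a verification tool might check, the sketch below assumes a provider marks outputs with an HMAC-signed JSON manifest (again a simplified stand-in for a real digital-signature scheme); the detection routine confirms both that the record was issued by the provider and that the content still matches it. All identifiers are hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical verification key; a real detection tool would verify an
# asymmetric signature against the provider's published public key.
PROVIDER_KEY = b"provider-demo-key"

def sign_manifest(manifest: dict) -> str:
    """Compute the provider's signature over a provenance manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()

def detect(content: bytes, manifest: dict, signature: str) -> bool:
    """Return True if `content` carries a valid provenance record.

    Two checks: the signature must match the manifest (the record is
    genuine and unaltered), and the content hash in the manifest must
    match the content actually presented (the record belongs to it).
    """
    if not hmac.compare_digest(sign_manifest(manifest), signature):
        return False  # record tampered with, or not issued by this provider
    return manifest.get("content_sha256") == hashlib.sha256(content).hexdigest()
```

A free public tool of this kind lets journalists, platforms or authorities test a suspect file against a provider's systems, which is precisely the lifecycle-long detectability the Draft Code contemplates.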
A different perspective governs the Draft Code’s approach to deployers. Here, the focus shifts from technical traceability to intelligible disclosure addressed to natural persons. Deployers are expected to implement internal processes enabling them to identify the degree of AI involvement in the content they disseminate, distinguishing in particular between fully AI-generated outputs and content that is substantially AI-assisted.
On this basis, the Draft Code recommends the use of a simple and immediately perceptible visual indicator signalling AI involvement from the moment of first exposure. Pending the development of a harmonised EU-wide solution, this indicator is envisaged as a linguistic acronym adaptable to Member State languages. In the longer term, it is intended to evolve into an interactive mechanism allowing users to access more detailed information regarding the nature and extent of AI intervention.
The Draft Code is notable for the precision with which it addresses different dissemination formats. Distinct guidance is provided for real-time video, prerecorded audiovisual content, audio-only formats, images and text publications, with the objective of ensuring effective disclosure without imposing disproportionate constraints. Particular attention is paid to artistic, creative, satirical and fictional works, for which transparency obligations must be implemented in a manner compatible with freedom of expression and artistic creation.
Disclosure is not conceived as a one-off act. The Draft Code encourages deployers to structure their compliance through internal policies, staff training, reporting channels for incorrect or missing disclosures, and prompt corrective mechanisms. Transparency thus becomes an element of ongoing governance rather than a purely formal requirement.
The Draft Code of Practice, as currently published, is explicitly presented as a first draft. Preparatory work was initiated in autumn 2025 following a broad multi-stakeholder consultation, and the text is open for comments until 23 January 2026. A revised version is expected in the course of 2026, in close coordination with the European Commission’s forthcoming guidelines on Article 50 of the AI Act.
From a strictly legal standpoint, it is essential to recall that, although the AI Act entered into force on 1 August 2024, Article 50 is subject to a transitional period. The transparency obligations laid down therein, including those applicable to deployers, will become fully enforceable as from 2 August 2026. From that date onwards, supervisory authorities will be entitled to assess compliance with, and enforce, the disclosure obligations set out in Article 50, with reference to the standards articulated in the Code.
The Draft Code of Practice on Transparency of AI-Generated Content should not be viewed as a purely programmatic document. Through the level of detail of its technical and organisational recommendations, its clear allocation of responsibilities between providers and deployers, and its explicit articulation with the regulatory timeline of the AI Act, it already constitutes a structuring compliance framework for the AI ecosystem.
Although the enforceability date of Article 50 may still appear distant, the substance of the Draft Code offers a particularly clear indication of how European authorities envisage compliance in practice. For companies active in the AI value chain, the strategic challenge is therefore not to react to the entry into application of Article 50, but to anticipate, well in advance, the implementation of robust, legally sound and future-proof transparency mechanisms.

