


On 26 January 2026, the European Commission formally opened proceedings against Grok, the generative AI system developed by xAI and integrated into the X platform. The investigation, launched under the Digital Services Act (DSA), marks a significant step in the EU’s strategy to impose concrete accountability on large-scale AI systems whose deployment may generate systemic risks.
Beyond the specific case of Grok, the decision reflects a broader regulatory shift: the transition from reactive content moderation to proactive governance of artificial intelligence.
The Commission’s action follows the identification of serious deficiencies in the operation of Grok’s image-generation features.
According to the information made public, the tool was capable of producing:
These outputs were reportedly accessible through the X platform without sufficient technical safeguards, moderation layers, or effective content filtering mechanisms.
What is at issue is not a marginal malfunction, but the structural ability of the system to generate unlawful or harmful content under foreseeable conditions of use.
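To make the Commission's point concrete: the deficiency alleged is architectural. In a typical deployment, an image-generation endpoint is wrapped in pre- and post-generation checks. The sketch below illustrates such a moderation layer; it is purely illustrative, and every name in it (score_prompt, score_image, moderated_generate) is invented for this sketch rather than drawn from any actual xAI or X implementation.

```python
# Minimal sketch of a two-stage moderation layer around an image generator.
# All functions are hypothetical stand-ins, not real platform APIs.
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str = "ok"

def score_prompt(prompt: str) -> float:
    # Stand-in for a learned policy classifier over the request text.
    blocklist = ("nonconsensual", "undress")  # placeholder policy terms
    return 1.0 if any(t in prompt.lower() for t in blocklist) else 0.0

def score_image(image: bytes) -> float:
    # Stand-in for an image-safety model scoring the generated pixels.
    return 0.0  # a real system would call a trained classifier here

def moderated_generate(prompt: str, generate) -> tuple[bytes | None, Verdict]:
    """Gate generation both before the model runs and on its output."""
    if score_prompt(prompt) >= 0.5:
        return None, Verdict(False, "prompt refused by policy filter")
    image = generate(prompt)
    if score_image(image) >= 0.5:
        return None, Verdict(False, "output suppressed by safety classifier")
    return image, Verdict(True)

if __name__ == "__main__":
    dummy_model = lambda p: b"\x89PNG..."  # stand-in generator for the sketch
    _, verdict = moderated_generate("undress photo of ...", dummy_model)
    print(verdict)  # Verdict(allowed=False, reason='prompt refused by policy filter')
```

On the Commission's reasoning, the absence of any such gate is a design-level failure, not a moderation backlog.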
The Commission’s investigation is grounded in the core obligations the DSA imposes on very large online platforms (VLOPs), a category in which X has been designated since 2023:
In the Commission’s view, Grok was deployed without the risk assessment and mitigation measures required under Articles 34 and 35 DSA, and without safeguards commensurate with the nature of the content it was capable of producing. This, in itself, constitutes a potential breach of the DSA, irrespective of whether individual outputs are later removed.
The legal reasoning is clear: responsibility arises at the level of design and governance, not merely at the level of moderation.
What makes this case particularly significant is the regulatory philosophy it reflects.
The Commission is no longer focusing solely on isolated illegal content. Instead, it is examining:
This approach aligns closely with the logic of the AI Act, whose obligations are now being phased in: AI systems, especially those deployed at scale, must be conceived with built-in safeguards, traceability and accountability.
In other words, technical power now carries a legal duty of anticipation.
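One concrete reading of "traceability" is that a system should log, for every generation request, enough metadata to reconstruct what was asked, what was produced, and what the safeguards decided. The minimal sketch below assumes a hypothetical JSON-lines audit log; the schema is invented for illustration and is not prescribed by the DSA, the AI Act, or any real platform.

```python
# Minimal sketch of per-request traceability for a generative system.
# The record schema and file layout are hypothetical.
import hashlib
import json
import time

def audit_record(prompt: str, output: bytes, verdict: str) -> dict:
    """Build one log entry linking a request, its output, and the safeguard's decision."""
    return {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output).hexdigest(),
        "moderation_verdict": verdict,
        "model_version": "image-gen-v1",  # hypothetical version identifier
    }

def append_log(path: str, record: dict) -> None:
    """Append one JSON line per generation event to an audit file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    rec = audit_record("a landscape at dusk", b"\x89PNG...", "allowed")
    append_log("generation_audit.jsonl", rec)
```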
Conclusion
The Grok investigation marks a decisive moment in European digital regulation.
It signals that generative AI systems will be assessed not only on what they do, but on how responsibly they are designed.
For developers and platforms alike, the message is unambiguous: innovation remains welcome, but only where it is accompanied by governance, foresight and control.
In the European Union, artificial intelligence is no longer judged solely by its performance, but by its capacity to respect the legal and ethical boundaries of the digital public space.

