News
24/10/25

The “Statement on Superintelligence” by the Future of Life Institute (October 2025): Toward a Conditional Ban on Superintelligence Development

On 22 October 2025, the Future of Life Institute (FLI) released a text barely thirty words long — yet its implications are profound, both politically and ethically:

“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

With this statement, the FLI and several thousand signatories — including Geoffrey Hinton, Yoshua Bengio, Steve Wozniak, Richard Branson, Mary Robinson, Meghan Markle, and Steve Bannon — call for a prohibition on the development of superintelligence until two cumulative conditions are met:

  1. a broad scientific consensus ensuring that such development can be carried out safely and controllably; and
  2. strong public buy-in, expressing genuine democratic consent.

A Paradigm Shift in AI Governance

Unlike the March 2023 open letter that called for a six-month pause in the training of advanced AI systems, the Statement on Superintelligence advocates a conditional ban — not a temporary moratorium, but a global freeze on the pursuit of superintelligence until both scientific safety and public legitimacy are established.

The FLI defines superintelligence as systems capable of surpassing human intelligence across most cognitive domains. The risks identified extend far beyond issues of bias or misinformation: loss of human control, large-scale social manipulation, economic dislocation, and, in the most extreme view, an existential threat to humanity itself.

Conditions for Lifting the Ban

The statement does not specify who would be competent to assess these conditions, nor how scientific consensus or public approval would be determined.

From a legal standpoint, the use of the term “prohibition” implies a binding legal interdiction, rather than a mere ethical recommendation. It suggests the creation of a normative instrument, at national or international level, grounded in the principles of precaution and collective security.

Implementing such an approach would require:

  • the establishment of an independent scientific body responsible for verifying the safety and controllability of advanced AI systems;
  • a democratic mechanism for public validation, through citizen consultation or legislative mandate; and
  • the creation of a “compute” control framework, monitoring the access to and use of computing power and infrastructure enabling the training of the most powerful models.

A More Radical Position than Current European Law

The Statement on Superintelligence goes far beyond the scope of the EU Artificial Intelligence Act (AI Act), which relies on a risk-based framework and does not envisage any general prohibition on the development of “strong” AI.

While the AI Act imposes transparency, traceability, and risk-management duties for general-purpose systems, the FLI calls for a pre-emptive ban — a suspension of development until both scientific and societal validation have been achieved.

Toward a Democratic Suspension of Technological Progress

This concise but powerful text introduces a novel legal and philosophical concept: that of a democratic suspension of technological progress, pending societal assurance of safety and control.

Such an approach echoes existing international moratoria on sensitive technologies — nuclear research, biotechnology, or chemical weapons — where scientific advancement remains legitimate but its application is subject to collective oversight.

The Statement on Superintelligence thus opens a new chapter in the global debate: the emergence of a legal and ethical framework capable of pausing, in the name of prudence, one of humanity’s most transformative trajectories.

Vincent FAUCHOUX
