News
29 September 2025

The United Nations Warns on Artificial Intelligence: A Defining Challenge for International Peace and Security

On 24 September 2025, the United Nations Security Council held a high-level debate on “Artificial Intelligence and International Peace and Security”, convened on the margins of the General Assembly.

This debate marked a turning point: AI is no longer treated as a mere technological or economic innovation but as a decisive factor for the stability – or potential destabilization – of the international order.

1. “The Window of Opportunity Is Closing”

In his opening statement, UN Secretary-General António Guterres warned:

“The window of opportunity to shape AI — in the service of peace, justice and humanity — is closing. We must act without delay.”

He recalled that the international community had, in the past, successfully addressed technologies with destabilizing potential, as it did with nuclear arms control and aviation safety, by establishing rules, creating institutions, and putting human dignity at the center.

Today, the question is no longer whether AI will affect international peace and security, but how that impact will be shaped.

2. A Double-Edged Instrument: Prevention or Destabilization

The Secretary-General emphasized that AI is “already here”, rapidly transforming the global economy, the information space, and international relations.

When responsibly deployed, it can:

  • anticipate food insecurity and population displacement,
  • facilitate humanitarian de-mining operations,
  • enable early detection of rising violence,
  • reinforce the protection of civilians.

But he cautioned that “without guardrails, it can also become a weapon”, identifying several major risks:

  • the use of autonomous targeting systems in armed conflicts,
  • AI-enabled cyberattacks capable of disrupting critical infrastructure within minutes,
  • the fabrication and dissemination of synthetic audio, video and images, threatening the integrity of information and public trust,
  • the growing pressure on energy, water and critical minerals required for large-scale AI models, which could fuel new geopolitical tensions.

3. Four Priorities for Global AI Governance

The Secretary-General urged the international community to focus on four priorities:

  1. Preserve meaningful human control over the use of force, ensuring that lethal decisions are never delegated to autonomous systems.
  2. Establish coherent global regulatory frameworks, to avoid fragmented norms and legal loopholes that could enable abuse.
  3. Safeguard the integrity of information in situations of conflict and insecurity, countering disinformation and algorithmic manipulation.
  4. Close the global capability gap in AI, so that innovation does not widen inequalities or generate new forms of instability.

He also recalled that in August 2025 the General Assembly created an Independent International Scientific Panel on AI and established an annual Global Dialogue on AI Governance, aimed at supporting sustained multilateral coordination.

4. Scientific Warnings: Mastery and Accountability

Two leading scientists contributed their expertise:

  • Yoshua Bengio, Professor at the University of Montreal, warned that if current trends continue, some AI systems could surpass human capabilities in most cognitive tasks within as little as five years.

He identified three major risks:

– the concentration of technological and economic power in the hands of a few actors,
– the malicious use of AI for sophisticated cyberattacks and disinformation campaigns,
– the misalignment and potential loss of human control over the most advanced systems.

“If AI capabilities continue to grow beyond human levels without scientific assurance that they are safe and aligned with our intentions, we may reach a point where AI operates irreversibly, beyond our control, thereby endangering all humanity.”

  • Yejin Choi, Professor at the University of Washington and Senior Researcher at the Stanford Institute for Human-Centered AI, called for a “bold collective investment” in high-risk, high-reward scientific research and for the creation of open and shared infrastructures, as well as expanded capacity-building so that AI development and benefits are not concentrated in the hands of a few states or corporations.

5. Legal and Normative Challenges

The debate underscored that AI governance is not merely technical but profoundly legal, raising issues of international law, international humanitarian law, human rights, and accountability:

  • Compatibility with international humanitarian law (IHL): how can compliance with the principles of distinction and proportionality be ensured when autonomous systems make lethal decisions? Who bears legal responsibility for violations of the laws of armed conflict?
  • Protection of fundamental rights and the integrity of information: AI-driven manipulation of content threatens freedom of expression and the right to reliable information — both essential to maintaining peace and security.
  • Traceability and accountability: the technical opacity of advanced AI models makes it difficult to attribute responsibility for harm, whether in civil liability, criminal accountability, or state responsibility for breaches of international obligations.

These challenges highlight the urgent need for international standards, transparency requirements, and independent oversight mechanisms, to ensure that AI remains under the effective control of human institutions.

Conclusion: A Call for Swift Collective Legal Action

The 24 September 2025 debate in the Security Council has placed AI at the heart of strategic global governance.

The risks are no longer hypothetical: they are already manifest in armed conflicts, cyberattacks, and disinformation campaigns.

The Secretary-General’s warning resonates as a call to immediate action:

“We must act without delay.”

It is now the responsibility of states, international organizations, and private actors to translate this call into clear, binding, and adaptive legal frameworks, capable of preserving international peace and security and ensuring that AI serves human dignity and the common good.

To access the official United Nations documentation related to this debate, click here.

Vincent FAUCHOUX