On 24 September 2025, the United Nations Security Council held a high-level debate on “Artificial Intelligence and International Peace and Security”, convened on the margins of the General Assembly.
This debate marked a turning point: AI is no longer treated as a mere technological or economic innovation but as a decisive factor for the stability – or potential destabilization – of the international order.
In his opening statement, UN Secretary-General António Guterres warned:
“The window of opportunity to shape AI — in the service of peace, justice and humanity — is closing. We must act without delay.”
He recalled that the international community had, in the past, successfully confronted technologies with destabilizing potential, from nuclear weapons to civil aviation, by establishing rules, creating institutions, and putting human dignity at the center.
Today, the question is no longer whether AI will affect international peace and security, but how that impact will be shaped.
The Secretary-General emphasized that AI is “already here”, rapidly transforming the global economy, the information space, and international relations.
When responsibly deployed, it can:
But he cautioned that “without guardrails, it can also become a weapon”, identifying several major risks:
The Secretary-General urged the international community to focus on four priorities:
He also recalled that in August 2025 the General Assembly created an Independent International Scientific Panel on AI and established an annual Global Dialogue on AI Governance, aimed at supporting sustained multilateral coordination.
Two leading scientists contributed their expertise:
One of the scientists identified three major risks:
– the concentration of technological and economic power in the hands of a few actors,
– the malicious use of AI for sophisticated cyberattacks and disinformation campaigns,
– the misalignment and potential loss of human control over the most advanced systems.
“If AI capabilities continue to grow beyond human levels without scientific assurance that they are safe and aligned with our intentions, we may reach a point where AI operates irreversibly, beyond our control, thereby endangering all humanity.”
The debate underscored that AI governance is not merely technical but profoundly legal, raising questions of international law, international humanitarian law, human rights, and accountability:
These challenges highlight the urgent need for international standards, transparency requirements, and independent oversight mechanisms to ensure that AI remains under the effective control of human institutions.
The 24 September 2025 debate in the Security Council has placed AI at the heart of strategic global governance.
The risks are no longer hypothetical: they are already manifest in armed conflicts, cyberattacks, and disinformation campaigns.
The Secretary-General’s warning resonates as a call to immediate action:
“We must act without delay.”
It is now the responsibility of states, international organizations, and private actors to translate this call into clear, binding, and adaptive legal frameworks capable of preserving international peace and security and ensuring that AI serves human dignity and the common good.