



In its Frontier AI Trends Report (December 2025), the UK AI Security Institute (AISI) publishes its first consolidated public analysis of trends in frontier artificial intelligence systems, which it has evaluated since November 2023 across domains considered critical to national security, public safety and economic stability.
This publication marks a significant step in the institutionalisation of AI safety and security governance, providing governments, companies and legal practitioners with a data-driven assessment of the rapid evolution of the most advanced AI models.
The report highlights a particularly rapid improvement in model performance, with certain capabilities doubling on average every eight months. In several sensitive domains, frontier AI systems are now surpassing established human expert baselines.
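To illustrate the pace such a figure implies (a purely arithmetical sketch, assuming the reported eight-month doubling time holds constant), a capability level $C$ evolving from a baseline $C_0$ would follow:

$$ C(t) = C_0 \cdot 2^{t/8}, \qquad C(24) = 2^{3}\, C_0 = 8\, C_0, $$

where $t$ is measured in months. On that assumption alone, measured performance would increase roughly eightfold over two years and more than twentyfold over three.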
In the cyber domain, AI models are capable of autonomously completing tasks that were previously reserved for experienced practitioners. In 2025, the Institute tested the first model able to perform expert-level cyber tasks of a kind that typically requires more than ten years of professional experience. The length and complexity of tasks that models can complete without human assistance continue to increase at an exponential rate.
In chemistry and biology, frontier systems now exceed PhD-level expert performance on certain domain-specific tasks. Notably, models are capable of generating laboratory protocols assessed as accurate and feasible in wet-lab environments, as well as providing highly effective real-time troubleshooting support for complex experimental processes.
These findings confirm that frontier AI systems are moving beyond mere decision support towards increasing functional autonomy, raising novel legal questions relating to control, accountability and oversight.
The report further notes that model safeguards and safety mechanisms have improved: circumventing the protective measures of certain recent systems requires significantly more sophisticated and resource-intensive efforts than was the case for earlier generations.
However, a core conclusion remains: vulnerabilities persist across all tested systems. The Institute reports that it has identified exploitable weaknesses in every frontier model evaluated to date. No system can currently be regarded as fully robust against malicious or unintended misuse.
From a legal perspective, this observation is particularly significant. It raises fundamental issues concerning the duty of care and diligence incumbent upon AI developers, providers and deployers, as well as the allocation of civil and potentially regulatory liability in cases where foreseeable vulnerabilities result in harm.
The conclusions of the report resonate directly with ongoing developments in European and international AI regulation, including the implementation of the EU AI Act, emerging AI liability frameworks, and general product safety and compliance obligations.
The growing autonomy of frontier AI systems challenges traditional liability models based on the mere use of a tool. It calls for a systemic legal approach encompassing control, accountability and oversight across the entire chain of development, provision and deployment.
In this respect, the AISI report contributes substantively to the debate on the need for robust legal governance of frontier AI, grounded in risk anticipation, transparency and traceability.
Beyond its technical findings, the report constitutes a strategic warning for companies deploying advanced AI systems in critical functions such as cybersecurity, research and development, strategic analysis, complex process automation or decision-making support.
It underscores that technological performance cannot be dissociated from robust governance, clearly allocated liability and effective human oversight.
Conclusion
The Frontier AI Trends Report 2025 issued by the UK AI Security Institute confirms a now unavoidable reality: frontier artificial intelligence is advancing faster than traditional control and accountability frameworks.
For companies and public authorities alike, the challenge is no longer merely to innovate, but to deploy frontier AI systems within a legally secure, responsible and strategically controlled framework. Frontier AI is no longer a purely technological matter; it has become a core issue of governance, liability and sovereignty.

