News
16/1/26

Frontier Artificial Intelligence: What the UK AI Security Institute 2025 Report Reveals about Risk, Safety and Legal Responsibility

In its Frontier AI Trends Report (December 2025), the UK AI Security Institute (AISI) publishes its first consolidated public analysis of trends observed in frontier artificial intelligence systems, evaluated since November 2023 across domains considered critical to national security, public safety and economic stability.  

This publication marks a significant step in the institutionalisation of AI safety and security governance, providing governments, companies and legal practitioners with a data-driven assessment of the rapid evolution of the most advanced AI models.

1. An unprecedented acceleration of frontier AI capabilities

The report highlights a particularly rapid improvement in model performance, with certain capabilities doubling on average every eight months. In several sensitive domains, frontier AI systems are now surpassing established human expert baselines.
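For context, an eight-month doubling time compounds quickly. As a back-of-the-envelope illustration (the arithmetic below is ours, not a figure taken from the report), the growth factor after $t$ months is:

```latex
f(t) = 2^{t/8}, \qquad
f(12) = 2^{1.5} \approx 2.8,\quad
f(24) = 2^{3} = 8,\quad
f(36) = 2^{4.5} \approx 22.6
```

In other words, a capability that doubles every eight months grows roughly eightfold in two years, which helps explain why the report treats the pace of change itself as a governance risk.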

In the cyber domain, AI models are capable of autonomously completing tasks that were previously reserved for experienced practitioners. In 2025, the Institute tested the first model able to perform expert-level cyber tasks, typically requiring more than ten years of human experience. The length and complexity of tasks that models can complete without human assistance continue to increase at an exponential rate.

In chemistry and biology, frontier systems now exceed PhD-level expert performance on certain domain-specific tasks. Notably, models are capable of generating laboratory protocols assessed as accurate and feasible in wet-lab environments, as well as providing highly effective real-time troubleshooting support for complex experimental processes.

These findings confirm that frontier AI systems are moving beyond mere decision support towards increasing functional autonomy, raising novel legal questions relating to control, accountability and oversight.

2. Persistent vulnerabilities despite strengthened model safeguards

The report further notes that model safeguards and safety mechanisms have improved. Certain recent systems require significantly more sophisticated and resource-intensive efforts to circumvent their protective measures than earlier generations.

However, a core conclusion remains: vulnerabilities persist across all tested systems. The Institute reports that it has identified exploitable weaknesses in every frontier model evaluated to date. No system can currently be regarded as fully robust against malicious or unintended misuse.

From a legal perspective, this observation is particularly significant. It raises fundamental issues concerning the duty of care and diligence incumbent upon AI developers, providers and deployers, as well as the allocation of civil and potentially regulatory liability in cases where foreseeable vulnerabilities result in harm.

3. Legal implications: liability, compliance and governance of frontier AI

The conclusions of the report resonate directly with ongoing developments in European and international AI regulation, including the implementation of the EU AI Act, emerging AI liability frameworks, and general product safety and compliance obligations.

The growing autonomy of frontier AI systems challenges traditional liability models premised on AI as a mere tool in its user's hands. It calls for a systemic legal approach, encompassing:

  • model design and architecture,
  • safety and security choices,
  • contractual allocation of risk,
  • conditions of deployment by corporate users,
  • and the effectiveness of human oversight mechanisms.

In this respect, the AISI report contributes substantively to the debate on the need for robust legal governance of frontier AI, grounded in risk anticipation, transparency and traceability.

4. A strategic warning for companies and public authorities

Beyond its technical findings, the report constitutes a strategic warning for companies deploying advanced AI systems in critical functions such as cybersecurity, research and development, strategic analysis, complex process automation or decision-making support.

It underscores that technological performance cannot be dissociated from:

  • a thorough legal risk assessment,
  • appropriate contractual safeguards,
  • and a broader reflection on the economic, ethical and societal impacts of frontier AI technologies.

Conclusion

The Frontier AI Trends Report 2025 issued by the UK AI Security Institute confirms a now unavoidable reality: frontier artificial intelligence is advancing faster than traditional control and accountability frameworks.

For companies and public authorities alike, the challenge is no longer merely to innovate, but to deploy frontier AI systems within a legally secure, responsible and strategically controlled framework. Frontier AI is no longer a purely technological matter; it has become a core issue of governance, liability and sovereignty.

Vincent FAUCHOUX
