News
1 October 2025

California’s SB-53: A New Global Benchmark for Transparency and Safety in “Frontier AI” Models

California’s Legislative Response to the Risks of Advanced AI

On 29 September 2025, the State of California enacted SB-53, the Transparency in Frontier Artificial Intelligence Act, introduced by Senator Scott Wiener and signed into law by Governor Gavin Newsom.

This statute is the first comprehensive legislative framework in the United States specifically dedicated to “frontier AI” models, that is, the most advanced artificial intelligence systems, trained on massive datasets with exceptionally high computational power (the statute sets the compute threshold at more than 10^26 operations) and capable of being adapted to a broad range of tasks.

The law pursues a dual objective: to strengthen transparency and accountability of AI developers and to prevent severe risks to public safety.

These risks, identified in the Governor’s Report on frontier AI models and during legislative hearings, include, among others, the potential loss of human control over powerful systems, their misuse for malicious purposes (such as cyberattacks or the development of biological or chemical weapons), and their capacity to circumvent human oversight.

SB-53 thus represents a shift from a largely voluntary self-regulatory approach to a binding legal framework, imposing clear obligations of governance and disclosure on the most significant AI actors.

1. Heightened Transparency and Risk-Management Obligations for Major “Frontier AI” Developers

SB-53 requires major developers of “frontier AI” models (in particular, those with annual gross revenues exceeding USD 500 million and operating at large computational scales) to adopt and publicly disclose a comprehensive internal governance framework, referred to as a “frontier AI framework.”

This framework must specify, among other things:

  • the methods used to assess the potentially dangerous capabilities of their models and the strategies adopted to mitigate those risks;
  • the incorporation of international standards and sectoral best practices;
  • the procedures for pre-deployment review and testing before any large-scale release or internal use;
  • the measures to ensure robust security of critical model parameters against unauthorized access or malicious exfiltration;
  • the mechanisms for prompt detection and remediation of severe incidents, including unanticipated autonomous behavior or loss of control.

Before deploying a new or substantially modified model, developers must also publish a transparency report describing the identified risks, the mitigation measures adopted, and, where applicable, the involvement of independent external evaluators.

Where sensitive information must be protected for security or trade-secret reasons, developers must explain the nature of the redactions and retain the full information for potential review by the authorities.

The law further establishes a duty to promptly report severe incidents. For example, an unauthorized disclosure of critical model parameters or a loss of control leading to physical harm must be reported without delay to the California Office of Emergency Services, the competent public authority.

Failure to publish governance policies, submit transparency reports, or report severe incidents may give rise to significant civil penalties, which can reach USD 1 million per violation.

2. Public-Sector Governance and Enhanced Protection for Whistle-blowers

Beyond the obligations imposed on private companies, the legislature has tasked the California Government Operations Agency with assessing the creation of a public high-performance computing infrastructure, known as “CalCompute.”

This initiative seeks to promote more equitable access to computational resources essential for the development of AI and to support projects in “frontier AI” that are ethical, sustainable, and in the public interest.

The law also strengthens the protection of employees acting as safety whistle-blowers.

It prohibits any contractual provision or corporate practice aimed at restricting or penalizing good-faith reporting of activities that may pose a substantial risk to public health or safety.

Major developers must implement an internal, anonymous reporting channel, track the handling of alerts, and clearly inform employees of their rights.

In cases of retaliation, employees may seek injunctive relief and reimbursement of legal costs through the courts.

3. A U.S. Precedent with International Significance

SB-53 embodies a forward-looking approach to the governance of systemic risks posed by “frontier AI”, complementing existing data-protection and algorithmic-transparency rules.

It is of particular relevance to European businesses and practitioners for two reasons:

  • it foreshadows a transatlantic dialogue on safety standards for “frontier AI” models, likely to interact with the EU Artificial Intelligence Act (AI Act);
  • it highlights the need for all developers or deployers of such models to establish robust internal organizational and contractual mechanisms capable of addressing comparable obligations of disclosure, reporting, and risk governance.

Although SB-53 is a Californian statute, some of its obligations may have extraterritorial effects on international developers providing models to users located in California.

The law thus serves as a reference point for policymakers worldwide who seek to close the gap between the speed of technological innovation and the capacity of the law to prevent significant AI-related harms.

Conclusion

By combining mandatory transparency, systematic risk-management duties, and protection for employee whistle-blowers, SB-53 imposes an unprecedented level of governance on the developers of the most advanced “frontier AI” models.

It represents a significant step towards bridging the gap between rapid innovation and the legal safeguards required to protect society, and it signals the emergence of international standards for AI safety.

This article is written by a French attorney and does not constitute legal advice on U.S. law. Any analysis or strategic decision involving the application of SB-53 should be undertaken with the assistance of a qualified U.S. attorney.

Vincent FAUHOUX