On 29 September 2025, the State of California enacted SB-53, the Transparency in Frontier Artificial Intelligence Act, introduced by Senator Scott Wiener and signed into law by Governor Gavin Newsom.
This statute is the first comprehensive legislative framework in the United States specifically dedicated to “frontier AI” models, that is, the most advanced artificial intelligence systems: foundation models trained on massive datasets with exceptionally high computational power (the statute sets the threshold at more than 10^26 computational operations) and capable of being adapted to a broad range of tasks.
The law pursues a dual objective: to strengthen transparency and accountability of AI developers and to prevent severe risks to public safety.
These risks, identified in the Governor’s Report on frontier AI models and during legislative hearings, include, among others, the potential loss of human control over powerful systems, their misuse for malicious purposes (such as cyberattacks or the development of biological or chemical weapons), and their capacity to circumvent human oversight.
SB-53 thus represents a shift from a largely voluntary self-regulatory approach to a binding legal framework, imposing clear governance and disclosure obligations on the most significant AI actors.
SB-53 requires major developers of “frontier AI” models, in particular “large frontier developers” whose annual gross revenues exceed USD 500 million and who operate at the largest computational scales, to adopt and publicly disclose a comprehensive internal governance framework, referred to as a “frontier AI framework.”
This framework must specify, among other things: how the developer incorporates national and international standards and industry best practices; the thresholds and assessments used to determine whether a model is capable of posing catastrophic risks; the mitigation measures applied before deployment; the use of independent third-party evaluations; the cybersecurity practices protecting unreleased model weights; and the internal processes for identifying and responding to critical safety incidents.
Before deploying a new or substantially modified model, developers must also publish a transparency report describing the identified risks, the mitigation measures adopted, and, where applicable, the involvement of independent external evaluators.
Where sensitive information must be protected for security or trade-secret reasons, developers must explain the nature of the redactions and retain the full information for potential review by the authorities.
The law further establishes a duty to promptly report “critical safety incidents” to the California Office of Emergency Services. For example, unauthorized access to or disclosure of critical model parameters, or a loss of control leading to physical harm, must be reported within short statutory deadlines.
Failure to publish governance policies, submit transparency reports, or report critical safety incidents may give rise to civil penalties of up to USD 1 million per violation, enforceable by the California Attorney General.
Beyond the obligations imposed on private companies, the legislature has tasked a consortium within California’s Government Operations Agency with assessing the creation of a public high-performance computing infrastructure, known as “CalCompute.”
This initiative seeks to promote more equitable access to computational resources essential for the development of AI and to support projects in “frontier AI” that are ethical, sustainable, and in the public interest.
The law also strengthens the protection of employees acting as safety whistle-blowers.
It prohibits any contractual provision or corporate practice aimed at restricting or penalising good-faith reporting of activities that may pose a substantial risk to public health or safety.
Major developers must implement an internal anonymous reporting channel, ensure appropriate follow-up of the reports received, and clearly inform employees of their rights.
In cases of retaliation, employees may seek injunctive relief and reimbursement of legal costs through the courts.
SB-53 embodies a forward-looking approach to the governance of systemic risks posed by “frontier AI”, complementing existing data-protection and algorithmic-transparency rules.
It is of particular relevance to European businesses and practitioners for two reasons. First, although a Californian statute, some of SB-53’s obligations may have extraterritorial effects on international developers providing models to users located in California. Second, the law serves as a reference point for policymakers worldwide, including in the European Union, who seek to close the gap between the speed of technological innovation and the capacity of the law to prevent significant AI-related harms.
By combining mandatory transparency, systematic risk-management duties, and protection for employee whistle-blowers, SB-53 imposes an unprecedented level of governance on the developers of the most advanced “frontier AI” models.
It represents a significant step towards bridging the gap between rapid innovation and the legal safeguards required to protect society, and it signals the emergence of international standards for AI safety.
This article is written by a French attorney and does not constitute legal advice on U.S. law. Any analysis or strategic decision involving the application of SB-53 should be undertaken with the assistance of a qualified U.S. attorney.