


The lawsuit brought by Anthropic, one of the leading developers of generative artificial intelligence systems, against the United States Department of War and several federal officials illustrates the growing tensions between national security imperatives, governmental authority, and the ethical governance of advanced artificial intelligence technologies.
In proceedings filed before the United States District Court for the Northern District of California, Anthropic challenges an exceptionally far-reaching administrative decision: a Secretarial Order prohibiting any contractor, supplier, or partner doing business with the U.S. military from engaging in commercial activities with the company. According to Anthropic, the measure, although presented as a national security determination, in reality constitutes an unlawful sanction and a form of governmental retaliation for the company's public positions on the ethical limits of artificial intelligence deployment.
The dispute therefore raises significant legal questions at the intersection of U.S. administrative law, constitutional protections of free speech, and the governance of frontier AI systems, at a time when artificial intelligence technologies have become central to strategic and military capabilities.
At the core of the dispute lies a Secretarial Order issued on February 27, 2026, by the Secretary of the Department of War, designating Anthropic as a "supply-chain risk to national security."
The consequences of this designation are immediate and particularly extensive. The order provides that no contractor, supplier, or partner doing business with the United States military may engage in any commercial activity with Anthropic. The measure is described as immediately effective and final, which in practice amounts to excluding the company from a significant portion of the technology ecosystem linked to U.S. defense activities.
In its complaint, Anthropic argues that this decision constitutes a disguised administrative sanction, directly affecting its ability to compete for federal contracts and to maintain commercial relationships with numerous industrial partners.
The company further notes that it had previously maintained a close working relationship with the Department of War and that its model Claude had become one of the most widely used artificial intelligence systems within the U.S. federal administration, including in certain classified environments.
Against this background, the contested decision appears particularly striking as it abruptly interrupts what had previously been an advanced technological partnership between the company and the federal government.
Behind the legal controversy lies a fundamental technical and ethical disagreement regarding the permissible uses of Anthropic’s artificial intelligence models, particularly its flagship system Claude.
Anthropic states that, since its founding, it has structured the development and deployment of its models around a framework of responsible use policies incorporating specific technical restrictions. Among these are two key limitations: a prohibition on the use of its systems for autonomous lethal warfare and a prohibition on mass surveillance of U.S. citizens.
According to the company, these restrictions reflect its assessment of the current capabilities and limitations of large-scale AI systems, including their potential to produce erroneous outputs and the risks associated with deploying such technologies in highly sensitive operational contexts.
Anthropic therefore maintains that it has never authorized the use of Claude within lethal autonomous weapons systems, arguing that it does not currently possess sufficient guarantees regarding the reliability and safety of such applications. Likewise, the company considers that the ability of its models to process and analyze vast volumes of data could, in the absence of adequate safeguards, facilitate large-scale population surveillance, which justifies maintaining specific usage restrictions.
The Department of War reportedly demanded that Anthropic remove these limitations and replace them with a general policy allowing the government to make “any lawful use” of the technology.
While Anthropic states that it agreed to several adjustments to its usage policies in order to accommodate certain operational requirements of the federal administration, it refused to eliminate the two restrictions that it considers essential to the safe and responsible governance of its systems.
This disagreement, centered on the extent to which advanced artificial intelligence systems may be deployed for military and national security purposes, ultimately led to a breakdown in the relationship between the company and the federal authorities.
From a legal standpoint, Anthropic’s complaint is based primarily on two grounds.
The first relates to U.S. federal administrative law, and more specifically to the provisions of the Administrative Procedure Act (APA). The company argues that the Secretarial Order constitutes a "final agency action" producing immediate legal effects and therefore subject to judicial review.
According to Anthropic, the decision is unlawful because it exceeds the statutory authority granted to the Department and because it is "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law" within the meaning of Section 706 of the Administrative Procedure Act.
Anthropic further contends that the administration relied on considerations unrelated to the legally relevant criteria, including the company’s public positions in policy debates concerning artificial intelligence safety and ethics.
The second ground raised by the company is constitutional in nature. Anthropic alleges that the measures adopted by federal authorities amount to retaliation for the company’s public speech, in violation of the First Amendment to the United States Constitution, which protects freedom of expression and the right to petition the government.
Conclusion
The case of Anthropic v. U.S. Department of War et al. starkly illustrates the emerging tensions between the development of advanced artificial intelligence technologies, state sovereignty, national security imperatives, and the ethical governance of technological innovation.
Beyond the dispute between a technology company and federal authorities, the litigation raises a broader structural question that is likely to shape the future legal framework governing artificial intelligence: who should determine the limits on the use of frontier AI systems: the private companies that develop these technologies, guided by internal governance and safety principles, or the public authorities acting in the name of national security and strategic interests?
The decision eventually rendered by the federal court may therefore establish an important precedent in defining the legal boundaries governing the relationship between AI developers and public authorities, at a time when artificial intelligence has become one of the central technological and geopolitical issues of the twenty-first century.
Disclaimer
This article provides a legal analysis written by a lawyer admitted to practice under French law, based on publicly available court filings.
It does not constitute legal advice regarding U.S. law. For any legal analysis or advice concerning the application of United States law, readers should consult a lawyer admitted to practice in the United States.
