A U.S. federal judge on Tuesday raised serious legal concerns over the Pentagon’s decision to blacklist artificial intelligence firm Anthropic, suggesting the move may amount to unconstitutional retaliation for the company’s stance on military use of AI.

During a hearing in California, U.S. District Judge Rita Lin said the Defense Department’s designation of Anthropic as a “national security supply-chain risk” appeared to be an attempt to punish the company for publicly opposing certain military applications of its technology. The label effectively bars Anthropic from key defense contracts and could lead to significant business losses.

The case stems from a lawsuit filed by Anthropic earlier this month, challenging the designation imposed under the authority of Defense Secretary Pete Hegseth. The company argues the government exceeded its powers under federal procurement law and violated constitutional protections.

At the center of the dispute are claims under the First and Fifth Amendments. Anthropic alleges that the blacklisting was retaliatory, targeting the company for its public criticism of AI use in surveillance and autonomous weapons systems, and thereby infringed on its right to free speech. It also contends that it was given no opportunity to contest the designation, in violation of due process guarantees.

Judge Lin indicated during the hearing that the government’s action “looks like an attempt to cripple” the company, and questioned whether the national security rationale was genuine or a pretext for retaliation. She is expected to issue a written ruling soon on Anthropic’s request for a temporary order blocking the designation while the case proceeds.

The Pentagon’s move appears to be unprecedented. The supply-chain risk designation has traditionally been used to restrict foreign entities suspected of posing threats to U.S. military systems; its application to a domestic AI firm raises new legal questions about the scope of executive authority in national security procurement.

Anthropic’s legal team argued that the government misapplied the statute to penalize the company during a contract dispute. According to its lawyers, such an interpretation would allow federal agencies to label vendors as security risks based on disagreements over contract terms or policy positions.

The U.S. government defended its decision in court, stating that Anthropic’s restrictions on the use of its AI model, Claude, created operational risks. Justice Department lawyers argued that the military must be able to rely on consistent and unrestricted access to critical technologies, and that any limitations imposed by a vendor could jeopardize national security.

Government counsel also pointed to hypothetical risks, including the possibility that future software changes could affect system performance in the middle of military operations. Such uncertainties, they argued, justified the precautionary designation.

Anthropic, however, maintains that its policies are based on safety and rights considerations. The company has stated that current AI systems are not sufficiently reliable for use in autonomous weapons and has opposed domestic surveillance applications on civil liberties grounds.

The case highlights a growing legal and policy conflict between government efforts to integrate AI into defense systems and the ethical boundaries set by private technology companies. Legal experts say the outcome could have far-reaching implications for federal contracting, particularly in sectors involving emerging technologies.

In addition to the California case, Anthropic has filed a separate lawsuit in Washington, D.C., challenging another supply-chain risk designation that could extend its exclusion to civilian government contracts.

The court’s upcoming decision on interim relief is expected to shape the immediate future of the dispute, while the broader case may set an important precedent on how constitutional protections apply in national security procurement decisions.