Microsoft has delivered its verdict on Anthropic’s legal battle against the Pentagon in the clearest possible terms: by filing an amicus brief in a San Francisco federal court, it declared that the AI industry must stand together against government overreach in the governance of artificial intelligence. The brief urged the court to grant a temporary restraining order against the Pentagon’s unprecedented supply-chain risk designation, arguing that the designation threatens technology supply chains critical to national defense. Amazon, Google, Apple, and OpenAI delivered the same verdict in a separate joint filing, completing what is now a unanimous statement from the top tier of the American technology industry.
The dispute that provoked this verdict began when the Pentagon labeled Anthropic a supply-chain risk after the company, during a $200 million contract negotiation, refused to allow its Claude AI to be used for mass surveillance of US citizens or for autonomous lethal weapons. Defense Secretary Pete Hegseth formalized the designation after the talks broke down, and federal agencies began cancelling Anthropic’s government contracts. The company responded with two simultaneous lawsuits, in California and Washington DC, challenging the designation as unconstitutional and unprecedented.
Microsoft’s verdict carries particular weight because of its direct integration of Anthropic’s technology into federal military systems and its participation in the Pentagon’s $9 billion cloud computing contract. Additional agreements with defense, intelligence, and civilian agencies, worth several billion dollars more, give Microsoft unique standing to speak on the relationship between AI technology and national security. The company publicly argued that the government and the technology sector must work together to ensure advanced AI serves national security without crossing ethical lines on surveillance or autonomous warfare.
In its court filings, Anthropic argued that the supply-chain risk designation was an unconstitutional act of ideological retaliation for its publicly stated AI safety positions. The company disclosed that it does not currently consider Claude safe or reliable enough for lethal autonomous operations, and said that this assessment, not politics, was the genuine basis for the restrictions it sought. It also noted that no US company had ever before received this designation, underscoring its unprecedented and retaliatory nature.
Congressional Democrats have separately demanded answers from the Pentagon about whether AI was used in a strike in Iran that reportedly killed over 175 civilians at an elementary school. Their inquiries are adding a legislative dimension to an already extraordinary multi-front confrontation. Together, Microsoft’s unambiguous verdict, the unanimous industry coalition, and congressional scrutiny represent the most powerful and comprehensive challenge to government overreach in AI governance that the United States has ever seen. The outcome of this case may determine the rules of the relationship between AI and the military for generations to come.