A group of nearly 40 researchers and engineers from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, have filed an amicus brief supporting a lawsuit by AI firm Anthropic against the U.S. Department of Defense. The brief marks a significant public alignment of leading AI practitioners with Anthropic in its escalating dispute with the Pentagon over the military use of artificial intelligence.
The conflict began after the DoD designated Anthropic a "supply chain risk" on Monday, a label typically reserved for foreign companies considered a potential threat to national security. The designation followed a breakdown in negotiations between the AI company and the Pentagon. Anthropic had refused to allow its AI model, Claude, to be used for analyzing bulk commercial data for mass domestic surveillance of Americans, or for developing autonomous weapons that can kill targets without human oversight.
This standoff brings a critical policy debate into the legal system, highlighting a deep, unresolved question in U.S. law: the extent to which the government can legally surveil Americans by using advanced AI to analyze vast quantities of commercially available data. More than a decade after Edward Snowden's revelations, a legal gap persists between public expectations of privacy and the government's data collection capabilities, now amplified by AI.
The support for Anthropic from prominent figures at rival companies underscores a growing rift between parts of the technology sector and the U.S. national security establishment. The outcome of the lawsuit could set a major precedent for the ethical guardrails and legal frameworks governing public-private partnerships in the development and deployment of strategic AI technologies.