Pentagon pressures AI firm Anthropic over military use safeguards

The U.S. Department of Defense is reportedly pressuring artificial intelligence developer Anthropic to ease safeguards that restrict military applications of its technology, according to sources familiar with the matter. The dispute centers on the Pentagon's desire for the company to lift restrictions that prevent its AI models from being used in systems for autonomous weapons targeting and domestic surveillance.

The issue was the subject of a high-level meeting at the Pentagon between U.S. Secretary of Defense Pete Hegseth and Anthropic CEO Dario Amodei. While Anthropic has reportedly been resistant to removing these ethical guardrails, recent changes to the company's internal safety policy suggest it may be considering a more flexible stance on military use cases.

This developing situation highlights the growing tension between U.S. national security interests and the safety protocols established by private AI labs. The Pentagon's push for fewer restrictions sets a potentially significant precedent for how the government engages with leading technology firms on the deployment of powerful, dual-use AI systems. The outcome of these discussions could shape the future of AI governance and its role in defense and intelligence operations.

ISN MEDIA