WASHINGTON D.C. – The U.S. Department of Defense (DoD) experimented with OpenAI's artificial intelligence models through Microsoft before the AI developer formally lifted its ban on military applications, according to a recent report. This development underscores the Pentagon's aggressive strategy to integrate advanced commercial AI, potentially moving faster than the public policies of the technology firms themselves.
In January 2024, OpenAI updated its usage policy, removing a blanket prohibition on "military and warfare" applications. The company clarified that the change was meant to enable certain U.S. national security use cases, while maintaining its ban on using the technology to develop weapons, cause harm, or destroy property. However, sources allege that the Pentagon's experimentation with the technology, facilitated through Microsoft's government cloud services, predates this official policy change.
This situation highlights the complex and often opaque relationship between leading technology firms, their government partners, and their stated ethical guidelines. While OpenAI sets usage policies, its models are delivered to major clients like the U.S. government through intermediaries such as Microsoft, creating potential gray areas in oversight and enforcement. The alleged early testing suggests the DoD is determined to maintain a technological advantage in the escalating global AI competition, particularly with China, by proactively exploring commercial AI capabilities.
The development raises critical questions about the governance of powerful, dual-use AI technologies. It puts a spotlight on the contractual and ethical frameworks that govern how sovereign states adopt commercial AI for defense and security purposes, and on whether corporate policies can be effectively enforced when national security interests are at stake.