Microsoft, Google and Amazon have moved to reassure business and public‑sector customers that Anthropic’s Claude artificial intelligence models will remain available for commercial and civilian use, despite a high‑profile dispute between the U.S. Department of Defense and the San Francisco‑based AI startup over military applications of its technology.

The three tech giants, which all partner with Anthropic to offer Claude through their respective cloud platforms and AI product suites, say the Pentagon’s decision to label Anthropic a “supply‑chain risk” is narrowly targeted at direct U.S. defense work and does not require a broader shutdown of Claude for non‑defense workloads.

Their coordinated message is aimed at containing confusion that spread after the Defense Department terminated a major contract with Anthropic and moved to bar the company’s tools from Pentagon projects, following a standoff over limits on military uses such as mass surveillance and autonomous weapons.

Microsoft: Claude stays for non‑defense customers

Microsoft was the first of the big cloud vendors to publicly clarify its stance, telling customers that Claude would remain part of its AI offerings outside of defense‑related projects.

The company, which has a multi‑billion‑dollar investment in Anthropic and offers Claude through its Azure AI Foundry, GitHub and certain Microsoft 365 integrations, said an internal legal review concluded it could continue working with Anthropic for non‑defense work even after the Pentagon’s designation.

“Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers other than the Department of War through platforms such as M365, GitHub, and Microsoft’s AI Foundry, and that we can continue to work with Anthropic on non-defense related projects,” a Microsoft spokesperson said, in a statement first reported by U.S. media outlets.

By drawing a sharp line between defense and non‑defense business, Microsoft is effectively creating an internal firewall: Claude will not be offered for Department of Defense projects that fall under the new restrictions, but will continue to power use cases like software development, knowledge management, customer support and productivity features across commercial, educational and non‑defense public‑sector customers.
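
For developers, the practical effect is continuity: existing Claude integrations on Microsoft platforms are expected to keep working. The sketch below is illustrative only, assuming a Claude deployment in Azure AI Foundry that exposes an Anthropic‑compatible Messages endpoint; the endpoint URL, credential variable and model name are hypothetical placeholders, not details confirmed by Microsoft.

```python
# Illustrative sketch only: calling a Claude model via an Azure AI Foundry
# resource. The base_url, environment variable and model name are
# hypothetical placeholders; consult Azure documentation for real values.
import os

from anthropic import Anthropic  # pip install anthropic

client = Anthropic(
    base_url="https://my-foundry-resource.services.ai.azure.com/anthropic",  # placeholder
    api_key=os.environ["AZURE_FOUNDRY_API_KEY"],  # placeholder variable name
)

message = client.messages.create(
    model="claude-sonnet-4",  # placeholder model/deployment name
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarise this support ticket: ..."}],
)
print(message.content[0].text)
```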

Microsoft’s position is significant given its deep footprint in U.S. government computing and its role as a major contractor across civilian and defense agencies. Had it interpreted the Pentagon’s move more broadly, thousands of organizations using Claude via Microsoft could have been forced to migrate to alternative models with little warning.

Google: Anthropic remains on Google Cloud for civilian workloads

Shortly after Microsoft’s clarification, Google issued its own assurance to customers of Google Cloud and its AI services. Google has partnered with Anthropic since 2023, both as an investor and as a cloud provider, and offers Claude models alongside its own Gemini family for enterprise AI workloads.

In a statement, a Google spokesperson said the company’s understanding of the Pentagon’s decision was that it did not prohibit collaboration with Anthropic on non‑defense projects.

“We understand that the Determination does not preclude us from working with Anthropic on non-defense related projects, and their products remain available through our platforms, like Google Cloud,” the spokesperson said.

That means Claude will continue to be available for typical enterprise scenarios such as document summarisation, code assistance, internal chatbots and data analysis, as long as those workloads are not directly connected to Department of Defense programmes affected by the supply‑chain risk determination.
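
For Google Cloud customers, that continuity can be pictured with a minimal sketch using the Anthropic SDK’s Vertex AI client, one documented route to Claude on Google Cloud; the project ID, region and model version below are placeholders to be replaced with an organisation’s own values.

```python
# Minimal sketch: calling Claude on Google Cloud through Vertex AI, using
# the Anthropic SDK's Vertex client (pip install "anthropic[vertex]") and
# application-default GCP credentials. IDs below are placeholders.
from anthropic import AnthropicVertex

client = AnthropicVertex(project_id="my-gcp-project", region="us-east5")

message = client.messages.create(
    model="claude-sonnet-4@20250514",  # placeholder; check the Vertex AI Model Garden
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarise this document: ..."}],
)
print(message.content[0].text)
```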

Google’s clarification is particularly relevant for systems integrators, consultancies and software firms that work with both Google Cloud and U.S. government agencies. Many of these organisations had wanted to know whether simply holding Pentagon contracts would force them to stop using Anthropic tools in other parts of their business.

Amazon: AWS keeps Claude for non‑defense users

Amazon, which has a broad strategic partnership and investment agreement with Anthropic through Amazon Web Services (AWS), also signalled that its customers would continue to have access to Claude for non‑defense workloads.

AWS offers Claude models as part of its Bedrock generative AI service, alongside models from Amazon and other providers. According to reports, Amazon’s position mirrors that of Microsoft and Google: customers can keep using Claude in most scenarios, but not for workloads that fall under the Pentagon’s restriction.
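
For AWS customers, a comparable minimal sketch uses the Anthropic SDK’s Bedrock client, one common way to reach Claude on AWS; the region and Bedrock model ID are placeholders and should be checked against the Bedrock model catalogue for a given account.

```python
# Minimal sketch: calling Claude on AWS through Amazon Bedrock, using the
# Anthropic SDK's Bedrock client (pip install "anthropic[bedrock]") with
# standard AWS credentials from the environment. The region and model ID
# below are placeholders.
from anthropic import AnthropicBedrock

client = AnthropicBedrock(aws_region="us-east-1")

message = client.messages.create(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder Bedrock model ID
    max_tokens=512,
    messages=[{"role": "user", "content": "Draft an internal status update: ..."}],
)
print(message.content[0].text)
```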

One report summarised Amazon’s stance by noting that “Amazon joined Microsoft and Google in [continuing] to offer Anthropic's Claude AI technology to customers after the Pentagon deemed it a ‘supply chain risk,’” and that AWS would permit use of Claude for “non-defense workloads.”

The combined message from the three cloud providers was captured in another media report: “Microsoft, Google, and Amazon confirmed that Anthropic's Claude AI remains available for non-defense customers despite a dispute between Trump's Defense Department and Anthropic. The conflict won't affect other companies using Claude through Microsoft and Google's products, ensuring continued access for commercial and civilian applications.”

The Pentagon’s move against Anthropic

The reassurances from the cloud giants came after an unusually public confrontation between Anthropic and the U.S. Department of Defense over the terms under which the Pentagon could use the company’s AI models.

In late February, U.S. Defense Secretary Pete Hegseth formally designated Anthropic a “supply‑chain risk to national security” and cancelled a Pentagon contract reportedly worth around $200 million. The move followed months of negotiations over policy language and came against the backdrop of broader debates about the role of generative AI in warfare and surveillance.

According to detailed accounts of the dispute, Anthropic insisted on including explicit prohibitions in its defense contracts against using Claude for “mass domestic surveillance of Americans” and for “fully autonomous weapons.” The Pentagon sought to rely instead on its standard requirement that vendors allow technology to be used for “all lawful purposes,” arguing that it does not sign up to specific policy constraints written by individual contractors.

On Friday, the Pentagon “cut ties with Anthropic, the company behind Claude AI,” with Hegseth branding it a supply‑chain risk and ordering that “no contractor, supplier, or partner doing business with the US military can deal with Anthropic,” according to one report. The designation is part of a legal mechanism that had previously been used mainly against foreign companies seen as security threats, such as Chinese telecom vendors.

In parallel, President Donald Trump ordered U.S. federal agencies to phase out Anthropic’s products from government systems, deepening the impact inside Washington, even as the company’s commercial growth continued in the private sector.

Anthropic: stance unchanged, customers protected

Anthropic responded to the Pentagon’s decision with a public statement and a detailed blog post, arguing that the “supply‑chain risk” label has been widely misinterpreted and does not apply to the vast majority of its customers.

In a post titled “Where things stand with the Department of War,” Anthropic co‑founder and chief executive Dario Amodei said the company did not intend to soften its safety commitments under pressure. “These threats do not alter our stance. We cannot in good faith comply with [the Pentagon’s] request,” he wrote, referring to the demand that Anthropic allow its tools to be used for any lawful purpose.

Amodei sought to reassure partners by clarifying how Anthropic interprets the designation. “With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” he wrote.

He added: “Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”

In practice, that means a large defense contractor that uses Claude for everyday business activities such as HR support, marketing content or internal software tools might not be affected, provided those uses are clearly ring‑fenced from any Pentagon‑funded work. However, projects that integrate Claude directly into defense systems, intelligence analysis pipelines or other military‑linked applications would now fall foul of the blacklist and need to move to alternative models.

Anthropic has described the designation as “unlawful and politically motivated,” arguing that it sends a negative signal to companies trying to set ethical limits around powerful technologies. The company has also indicated that it is prepared to challenge the decision through legal avenues, though no formal lawsuit has yet been announced.

Contrast with OpenAI’s approach

The Pentagon’s action against Anthropic has drawn attention to the contrasting strategy of Anthropic’s main rival, OpenAI, which has moved to deepen its cooperation with the U.S. military.

Shortly after the Trump administration’s order to remove Anthropic from government systems, reports surfaced that OpenAI had struck an agreement to provide its models for classified Department of Defense networks, effectively stepping into the space that Anthropic had vacated.

In internal messages reported by U.S. media, OpenAI chief executive Sam Altman told employees that the company was negotiating with the Pentagon to deploy its technology on classified systems, while at the same time seeking “exclusions against domestic surveillance and the operation of autonomous weapons without human oversight.”

While those assurances have not been fully detailed in public, experts note that it is relatively rare for defense contractors to formalise such limits in their contracts. Jerry McGinn, director at a Washington‑based think tank, observed that “contractors seldom dictate how their products can be used,” underlining how Anthropic’s attempt to codify its red lines represents a break with past practice.

The episode has intensified a broader policy debate in Washington and Silicon Valley about whether and how AI labs should set binding restrictions around military use of frontier systems, or whether those decisions should remain primarily with elected officials and defense agencies.

Impact on defense and commercial users

In the short term, the Pentagon’s move is already reshaping parts of the U.S. defense industrial base. Some contractors have reportedly instructed staff to stop using Claude in any work that touches military projects, and to shift to alternative models from OpenAI, Google, Amazon or in‑house solutions.

For commercial users and non‑defense public‑sector bodies, however, the combined messages from Anthropic and its three major cloud partners point to continuity rather than disruption. “Cloud vendors are letting customers know that Anthropic's popular AI tools can still be accessed after the Department of Defense blacklisted the company,” one report noted, adding that only defense‑related projects fall under the new restriction.

Anthropic says demand for Claude among businesses and developers has remained strong, and in some cases has increased, as the public dispute draws attention to the company’s safety‑first branding. The firm continues to roll out new versions of its models and expand into additional markets through its partnerships with Microsoft, Google and Amazon.

At the same time, legal and policy analysts are watching closely to see whether the U.S. government’s use of the “supply‑chain risk” designation against a domestic AI company becomes a one‑off or a precedent: deploying a mechanism historically reserved for foreign security threats in a contractual dispute over ethical clauses marks a new and controversial step.

For now, the practical outcome for most organisations using Claude is limited: as long as their deployments are not tied directly to Department of Defense contracts covered by the determination, Microsoft, Google and Amazon say they can continue to rely on Anthropic’s AI as before.

The longer‑term consequences may depend on how courts, regulators and future administrations view the balance between national‑security discretion and AI companies’ attempts to set hard safety lines, and on whether more vendors choose to follow Anthropic’s path of refusing certain military uses or OpenAI’s path of negotiating guardrails while deepening ties with the Pentagon.
