
Anthropic-Pentagon AI Battle

By Mahika Khosla | Mar. 31, 2026

The boom of missiles breaking the sound barrier ripples through the smoke-streaked sky, fighter jets somersault across the clouds and the motors of deadly kamikaze drones screech over the heads of ducking soldiers as the army slowly advances. At an ever-increasing pace, Artificial Intelligence (AI) and computing are being integrated into weapon systems across the world. Now, the future of such integration between the military and advanced technology is being challenged in a battle between the Pentagon and Anthropic, one of the largest U.S. AI companies.

Angelica Liljestam Art

For years, the Pentagon has been working on integrating AI into its cybersecurity-related tasks and autonomous weapon systems. According to the Washington Post, the Pentagon is accelerating its use of AI, partly in response to China’s development of advanced weapons such as hypersonic missiles, which pose a unique threat to the United States. Aiding the Pentagon’s goals are four major AI companies: Google, OpenAI, xAI and Anthropic.


Anthropic’s technology has played a crucial role in President Donald Trump’s foreign policy. Per the Washington Post, Anthropic’s large language model (LLM) Claude was utilized in Operation Absolute Resolve, a Jan. 3 military operation in which the United States bombed Venezuelan air defenses and security forces in order to capture Venezuela’s acting President Nicolás Maduro. When news of the raid was released, Anthropic stated that it neither approved of nor was aware of the use of Claude to support the raid. Anthropic also disapproved of such use of AI because it conflicts with the company’s safety policies. Policies like the Usage Policy and the Responsible Scaling Policy are part of Anthropic’s mission to prevent Claude from being used for violent purposes, especially for “mass domestic surveillance” and “fully autonomous systems.”


In rebuttal, several Pentagon associates have made their thoughts on the situation public. Emil Michael, the Under Secretary of Defense for Research and Engineering, stated that the current government safeguards for surveillance and autonomous systems are already sufficient because they have gone through a democratic process. On Feb. 24, U.S. Defense Secretary Pete Hegseth demanded that the AI company allow “all lawful use cases” of Claude for the military, without limitation, by 5:01 p.m. EST on Feb. 27.


“I think the term ‘lawful purposes’ is extremely subjective and can be open to interpretation. This may allow the government officials to misuse AI as they please since legality can be altered,” Senior Aarthi Setheruman said.

Anthropic was not able to reach an agreement with the Pentagon by the deadline. As a result, the Pentagon designated the company as a “supply chain risk.” Anthropic was blacklisted from use by U.S. federal agencies and defense contractors, with the Pentagon hoping to phase out usage of its products within six months.


Numerous companies have since ceased using Anthropic’s products. Alexander Harstrick, managing partner at J2 Ventures, told CNBC that 10 of his firm’s portfolio companies “have backed off” from using Claude because of the company’s dispute with the Pentagon. Reuters has reported that Lockheed Martin is expected to stop using Anthropic’s products. Such actions may be detrimental to Anthropic because the company receives 80% of its revenue from enterprise customers. Claude was still utilized in the Iran attacks on March 1, per the Washington Post. On March 9, Anthropic filed two lawsuits against the Trump administration, claiming that Pentagon officials are illegally retaliating against the AI company because of its safety policies and are exceeding the scope of the supply chain risk law.


OpenAI, the AI company known for its LLM-powered chatbot ChatGPT, came to an agreement with the Pentagon before the Feb. 27 deadline. On March 1, OpenAI stated that the contract was adjusted to establish clearer guardrails, specifically in surveillance programs. Autonomous weaponry was not mentioned.


OpenAI’s contract has generated conflict among the company’s staff. According to CNN, an anonymous employee told the network that some OpenAI employees “really respect” Anthropic for standing up for itself and are frustrated with how OpenAI handled its own contract.


“I feel uneasy about military usage of AI because AI has been known to contain biases. Generative AI should definitely not be used in the military except for simple things like sorting through data. Moreover, a limitation of using AI for such purposes would be that it could be trained on unreliable data that is not fully representative of the truth and there will be less human oversight,” Junior Hrishita Diwan said.

AI’s future role is constantly shaped by both its growing capabilities and the human decisions that govern it. The conflict between Anthropic and the Pentagon raises the question of whether militaristic goals and ethical usage will be able to coexist.

About the Contributors


Mahika Khosla

Staff Writer


Mahika Khosla is a senior at Leland High School and a Staff Writer for The Charger Account. Some of her favorite activities are watching Shark Tank, visiting National Parks, and rooting for the Warriors.

Angelica Liljestam

Artist
