Anthropic Pentagon contract: US Defense Secretary Pete Hegseth warns Anthropic to allow full military use of its AI or risk losing Pentagon contract
Officials also warned they could invoke the Defense Production Act, which can legally compel companies to support military needs, as reported by AP News. Hegseth and Anthropic CEO Dario Amodei met Tuesday in a discussion described as polite but tense. Amodei refused to change two key rules: no fully autonomous AI weapons targeting, and no AI surveillance of US citizens.
What Anthropic does & why it matters
Anthropic created the AI chatbot Claude and is the only major AI firm not yet fully supporting the Pentagon's internal AI network. The Pentagon awarded AI contracts worth up to $200 million each to four companies: Anthropic, Google, OpenAI, and xAI, as stated by AP News. Anthropic was the first approved for classified military networks, while the others mainly work on unclassified systems. Hegseth recently praised only Google and xAI, saying the military does not want AI that refuses to help fight wars.
Why Anthropic is worried
CEO Amodei says powerful AI could be dangerous if used for autonomous weapons, mass surveillance, or tracking public dissent. In a recent essay, Amodei warned that AI risks could become very serious by 2026 if not carefully managed. Anthropic has promoted itself as a "safety-focused" AI company since its founders left OpenAI in 2021. Experts say Anthropic has limited leverage because its competitors have already agreed to military use rules.
Political & policy tensions
Anthropic previously worked closely with the Joe Biden administration on AI safety checks. It has clashed with the Donald Trump administration over AI regulations and chip export rules. Trump's AI adviser David Sacks accused Anthropic of using fear to push regulation. Anthropic co-founder Jack Clark said AI development needs "balanced optimism and fear," as stated by AP News. The company has also publicly criticized chip maker Nvidia over policy issues, despite the two being partners.
Experts warn that the Pentagon's fast adoption of AI in warfare and surveillance raises serious legal and ethical questions. Some legal analysts say US laws are not keeping up with rapid AI development, especially regarding the monitoring of citizens. The US government is pressuring Anthropic to fully support military AI use, but the company is resisting on safety and ethical grounds, creating a major clash between national security goals and AI responsibility.
FAQs
Q1. Why is the US government pressuring Anthropic?
The Pentagon wants the company to allow its AI to be used in all legal military work or it may lose its contract.
Q2. What is Pete Hegseth asking from Anthropic?
He wants the company to approve full military use of its AI technology, including defense operations.