
Innovating with AI

Anthropic isn't the "good guy" you think it is


Hey Reader,

This week, the U.S. Department of Defense moved to end its contract with Anthropic and switch to OpenAI for using AI to work with classified documents.

On the surface, this looks like a battle between good guys who want more guardrails (Anthropic) and madmen who want AI to shoot people (the Department of Defense), with OpenAI selling its soul to poach a $200 million contract.

That's how I originally thought about it – but as I researched it more deeply for this newsletter, my opinion on Anthropic's position changed.

While it's true that Anthropic's guardrails exist – they prohibit Claude, their AI model, from being used for surveillance of American citizens or for autonomous firing of weapons – these limitations rule out only a tiny sliver of what's possible in military AI.

So, if you are concerned about the creation of killer robots, Anthropic may be a lesser evil, but they're still actively and proudly building military AI tech.

From Bloomberg:

Anthropic PBC was among the artificial intelligence companies that submitted a proposal earlier this year to compete in a $100 million Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming...
The company made the submission during fraught negotiations with the Defense Department over Anthropic’s red lines for how its technology was to be used by the military...

Executives at the $380 billion company have repeatedly insisted they support the extensive lawful use of AI in combat, short only of mass domestic surveillance and “fully autonomous weapons.”

Anthropic makes great software and AI models that I use every day. They've done a great job advocating for AI safety in writing and in public appearances by their leaders. And I'm not against companies working with the military.

But the reality is that most of what Anthropic does for AI safety comes in the form of words rather than action.

Because they are one of the few companies that even attempt to gesture at AI ethics, they stand out in the industry, earning praise from consumers and industry observers (as well as the disdain of the Trump administration).

But, in practice, Anthropic still wants to make money building military AI and has no qualms about AI being used for killing. They have created their own "red line" about fully autonomous weapons, but they apparently believe that an armed drone swarm is OK because it has a "human in the loop" – that is, at least one person retains the ability to disable the hundreds of armed drones in the swarm.

In my view, that's a pretty shady distinction that makes me wonder how much the so-called "red line" really matters in practice.

This was a silly stunt by the Defense Department, but don't give Anthropic undue credit for saying 'no'

Anthropic is generally being praised for standing up to the DoD, which wanted them to drop their "no autonomous firing" and "no surveillance of Americans" limitations.

The media stunt of making a fuss about these (largely theoretical) guardrails made sense for Secretary of Defense Pete Hegseth – he wants to bolster his persona as a guy who "fights the woke." Just ignore the fact that the Trump administration signed this deal with Anthropic seven months before they cancelled it. 🤷‍♂️

But, cynically, I think it also made strategic sense for Anthropic to lose its $200 million contract. Not for moral reasons, but because:

  1. The actions taken by the Trump administration against Anthropic will probably be reversed in court.
  2. Anthropic gets to bask in the glory of being the "good guy" while also not having to swear off AI weaponry.

Anthropic signed this deal in July 2025 with full knowledge of how Trump and Hegseth view the world. It is hard for me to believe the folks who run Anthropic were surprised that the Trump administration might push the boundaries of morality and reason.

I think Anthropic's actions show that they are basically doing normal capitalism – taking the money with minimal moral boundaries – while also benefiting from a good public-relations campaign about AI safety.

It also helps that Elon Musk and OpenAI's Sam Altman, two of Anthropic's biggest competitors, basically can't say anything about AI without sounding like a comic-book villain.

But I would argue that Anthropic is still way more permissive about military AI use than the median American would be if they fully explored this question.

But we also want the American military to win

Even though American civilian leadership is suboptimal right now, most IWAI readers live in places that generally benefit from America's military. I certainly benefit in Colorado, but even outside North America, most of the world is safer because America spends more on its military than the next 7 countries combined.

So, while my opinions on specific American military action vary, I think it is generally good for us to use technology to bolster our military's capabilities. This is also the goal of the on-again, off-again Chip War restrictions on Nvidia selling advanced AI hardware to China. Americans generally agree that we want to stay ahead in military AI, even if the details are debatable.

My desire to see the American military succeed also led me to the conclusion that this whole escapade was primarily a PR stunt.

In prepping for this post, I did a bunch of digging for evidence that Anthropic's terms were actually limiting the effectiveness of our armed forces. I take the safety of our service members and the success of our military very seriously, so I'd be willing to concede some guardrails if they were truly hurting a mission.

Perhaps unsurprisingly, I was unable to find any military source – uniformed personnel or prominent former military leaders – who publicly stated that Anthropic's (very minimal) guardrails were a problem for real-world military success.

To me, it seems that this conflict was almost entirely the invention of Hegseth and his civilian team, who thought Anthropic was just dandy in July and changed their minds in February.

So, what is actually different about Anthropic?

  1. They make some of the best AI models – and have had a great run of PR lately, even though the undercurrent was "Claude is going to replace millions of workers."
  2. They say nicer things about AI ethics and safety... but in practice they are not that different from other AI labs.
  3. In a world where a lot of AI company leaders (and the Secretary of Defense) seem to be trolling people on AI safety for fun, Anthropic stands out as relatively better.
  4. Big companies like Google and Microsoft are still better at balancing government contract work with public perception – they have decades of experience threading this needle.

Yes, Anthropic looks good relative to a bunch of jerks. But I wouldn't get too attached to them just because of their professed values. I suspect that they'll continue to push the frontiers of what's possible – and what's ethical – just like all their competitors.

Until next time,

– Rob
CEO of Innovating with AI


Did someone forward you this email? Join our free newsletter here.

Want to share this post? Here's the link to read online and share.

Innovating with AI

Coaching, community & curriculum to help everyone thrive in our AI‑powered future.
