
Anthropic vs. DoD: "Any lawful use" is a fight about control

2 points | colek42 | 1 day ago

I served 12 years in the infantry, then built targeting tools at JSOC against ISIS. Now I lead a team building AI tools that automate the compliance process. I’ve got opinions on Anthropic + DoD.

When people argue about “AI in weapons” like it’s a sci-fi trigger bot… I can’t take it seriously.

A “kill chain” isn’t a vibe. It’s a process:

Find, Fix, Track, Target, Engage, Assess (F2T2EA). Most of it is information work: sorting signal from noise, building confidence, tightening timelines, and getting decisions to the right humans fast enough to matter.

That’s why this Anthropic vs. DoD fight is getting attention. It’s not just “ethics.”

-> It’s about control.

Here’s what’s actually on the table:

Anthropic says they’ll support the military — but they want two carve-outs: no mass domestic surveillance and no fully autonomous weapons (their definition: systems that “take humans out of the loop entirely” and automate selecting/engaging targets).

Anthropic also says DoD demanded “any lawful use” and threatened offboarding / “supply chain risk” pressure if they didn’t comply.

A DoD memo posted on media.defense.gov explicitly calls for models “free from usage policy constraints” and directs adding standard “any lawful use” language into AI contracts.

The dispute escalated fast, including federal offboarding/blacklist actions and a “supply chain risk” designation, as reported by major outlets.

Now my take, as someone who’s lived inside the targeting reality:

AI can absolutely help the kill chain without ever being the one “pulling the trigger.”

Speeding up Find/Fix/Track/Target changes outcomes — and it’s not hypothetical.

But if we’re going to talk about “any lawful use,” then stop outsourcing national policy to contract fights.

DoD already has policy (DoD Directive 3000.09) requiring that autonomous weapon systems allow appropriate levels of human judgment over the use of force. So the real question isn’t whether humans matter.

It’s this:

Do we want safety and governance implemented at the model layer (vendor guardrails), the contract layer (“any lawful use”), or the law/policy layer (Congress + DoD doctrine + auditing)?

Because “Terms of Service vs. warfighting” is a stupid place to settle a question this big.

If you’ve worked in intel, targeting, acquisition, or governance:

Where should the boundary live: model, contract, or law? And who owns accountability when it breaks?

8 comments


tacostakohashi | 1 day ago

I think it's good for this boundary to live in multiple places.

haute_cuisine | 1 day ago

Snowden already showed what lawful use actually means.

OgsyedIE | 1 day ago

GPT, the author of this piece, did not in fact serve 12 years in the infantry.

https://en.wikipedia.org/wiki/lying

colek42 | 1 day ago

That is quite an ignorant statement to make. I spent three years in combat, and am permanently disabled from my service.