OpenAI agrees with Dept. of War to deploy models in their classified network
1382 points| eoskx | 1 day ago |twitter.com
https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...
Imnimo|1 day ago
tedsanders|1 day ago
zingerlio|1 day ago
[1]: https://www.wired.com/story/openai-staff-walk-protest-sam-al...
tempaccount420|1 day ago
In my mind the only people left are those who are there for the stocks.
arugulum|1 day ago
But they did.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
2snakes|1 day ago
weatherlite|1 day ago
Well some may voluntarily leave, some will be actively poached by Anthropic perhaps and some I suppose will stay in their jobs because leaving isn't an easy decision to make.
coliveira|1 day ago
miohtama|1 day ago
https://www.theguardian.com/world/2026/feb/21/tumbler-ridge-...
granzymes|1 day ago
Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing up all political capital.
ecocentrik|1 day ago
ivan_gammel|1 day ago
crowcroft|2 hours ago
blueblisters|1 day ago
But what's the most charitable / objective interpretation of this?
For example - https://x.com/UnderSecretaryF/status/2027594072811098230
Does it suggest that determination of "lawful use" and Dario's concerns falls upon the government, not the AI provider?
Other folks have claimed that Anthropic planned to burn the contentious redlines into Claude's constitution.
Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective Anthropic's stand seems like the correct long-term approach. And at least some AI researchers appear to agree.
cedws|1 day ago
Analemma_|1 day ago
It's absurd, and doubly so if OAI's deal includes the same or even similar redlines to what Anthropic had.
manmal|1 day ago
gabeh|1 day ago
unfunco|1 day ago
user0648|1 day ago
jobs_throwaway|1 day ago
tim333|1 day ago
ukblewis|1 day ago
[deleted]
quantumwannabe|1 day ago
>The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.
https://x.com/UnderSecretaryF/status/2027566426970530135
> For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.
> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
https://x.com/UnderSecretaryF/status/2027594072811098230
toraway|1 day ago
MostlyStable|1 day ago
Nothing in the quoted text comes anywhere close to the realm of justifying the retaliatory actions.
advisedwang|1 day ago
1. We've seen government lawyers write memos explaining why such-and-such obviously illegal act is legal (see: torture memo). Until challenged, this is basically law.
2. We've seen government change the law to make whatever they want legal (see: patriot act)
3. We've seen courts just interpret laws to make things legal
A contractor doesn't realistically have the power to push back against any of these avenues if they agree to allow anything legal.
(At the risk of triggering Godwin's Law, remember that for the most part the Holocaust was entirely legal - the Nazis established the necessary authorization. Just to illustrate that when it comes to certain government crimes, the law alone is an insufficient shield.)
makeramen|1 day ago
So the question is: do you trust the government to effectively govern its own use of AI? or do you trust Anthropic's enforcement of its TOS?
ignoramous|1 day ago
Does the qualifier "domestic" for mass surveillance mean that OpenAI allows the use of its models for whatever isn't "domestic"?
ajkjk|1 day ago
SpicyLemonZest|1 day ago
If his characterization of the agreement is correct, which I will not believe and you should not believe until a trustworthy news outlet publishes the text, I suppose this would convince me that Hegseth does not literally plan to build a Terminator for democracy-ending purposes. There's a lot of inexcusable stuff here regardless, but perhaps merely boycotting OpenAI and the US military would be a sufficient response if this all checks out.
cube00|1 day ago
ChatGPT maker OpenAI has the same redlines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.
https://edition.cnn.com/2026/02/27/tech/openai-has-same-redl...
slim|1 day ago
skybrian|1 day ago
spongebobstoes|1 day ago
westjerry35|1 day ago
[deleted]
spprashant|1 day ago
deaux|1 day ago
swat535|1 day ago
Anyone thinking they have any virtue is naive.
push0ret|1 day ago
arppacket|1 day ago
A few months down the line, OpenAI will quietly decide that their next model is safe enough for autonomous weapons, and remove their safeguard layer. The mass surveillance enablement might be an indirect deal through Palantir.
yoyohello13|1 day ago
harmonic18374|1 day ago
fintechie|1 day ago
fwlr|1 day ago
unknown|1 day ago
[deleted]
foobarqux|1 day ago
Anthropic said that mass surveillance was per se prohibited even if the government self-certified that it was lawful.
lathgan|1 day ago
https://www.binance.com/en/square/post/35909013656801
I'm sure more will drop in the coming months.
Jcampuzano2|1 day ago
If there's anything this admin doesn't like, it's being postured against or called out by literally anyone, especially in public.
Monotoko|1 day ago
davidw|1 day ago
matsemann|1 day ago
padolsey|1 day ago
jakeydus|1 day ago
stackedinserter|6 hours ago
t0lo|1 day ago
unknown|1 day ago
[deleted]
curiousgal|1 day ago
bishop_cobb|1 day ago
[deleted]
KronisLV|1 day ago
tao_oat|1 day ago
ozgung|1 day ago
An algorithm, an ML model trained to predict next tokens to write meaningful text, is going to KILL actual humans by itself.
So killing people is legal,
Killing people by a random process is legal,
A randomized algorithm deciding on who to kill is legal,
And some of you think you are legally protected because they used the word “domestic”?
mpalmer|1 day ago
Who said that any of it is legal? Keeping in mind that when the government does something, it usually takes more than 24h for there to be an official determination on whether they broke the law.
nsvd2|1 day ago
boxedemp|1 day ago
booleandilemma|1 day ago
techpression|1 day ago
They will deploy this on a domestic scale and claim to use it to locate non-domestic threats. I can’t believe anyone is falling for this.
pbnjay|1 day ago
pbnjay|15 hours ago
tintor|1 day ago
- OpenAI is ok with use of their AI for autonomous weapons, as long as there is "human responsibility"
- Anthropic is not ok with use of their AI for autonomous weapons
IAmGraydon|1 day ago
fiatpandas|1 day ago
So there’s the difference, and an erasure of a red line. OpenAI is fine with autonomous weapon systems. Requiring human responsibility isn’t saying much: there are already military courts, rules of engagement, and international rules of war.
ttrashh|1 day ago
adangert|1 day ago
Income and revenue sources always, inevitably, and without fail, determine behavior.
aoeusnth1|1 day ago
operator_nil|1 day ago
insane_dreamer|1 day ago
and we know we can trust openAI because they were founded on "open" and "safe" AI (up until they realized how much money there was to be made, at which point their only value changed to "make money")
AbstractH24|1 day ago
Yesterday and the day before sentiment seemed to be focused on “Anthropic selling out”, then that shifted to “Anthropic holds true to its principles in a David vs Goliath” and “the industry will rally around one another for the greater good.” But suddenly we’re seeing a new narrative of “Evil OpenAI swoops in to make a deal with the devil.”
Reminds me of that weekend when Sam Altman lost control of OpenAI.
deepfriedbits|1 day ago
karmasimida|1 day ago
Mad respect to Sam; now I believe OpenAI has a better chance of winning the race.
slibhb|1 day ago
dgxyz|1 day ago
I hope everyone goes and works for Anthropic and OpenAI collapses.
Markets going to be interesting on Monday. This plus a war. Urgh.
pu_pe|1 day ago
jordanscales|1 day ago
BoiledCabbage|1 day ago
So it wasn't about those principles making them a supply chain risk? They're just trying to punish Anthropic for being the first ones to stand firm on those principles?
hakrgrl|1 day ago
[deleted]
iainctduncan|1 day ago
weasels gonna weasel
vander_elst|1 day ago
kledru|1 day ago
Sadly it would be very difficult for Anthropic to relocate to another country with their IP, models, and infrastructure.
(Guess I need to build everything I intended this year in a weekend.)
bodobolero|1 day ago
Both are based in Europe but Proton Lumo has the better privacy promises.
Would be interested in experiences of others with those alternatives for question/answering type research (not for coding for which there exist other, better alternatives like Gemini and Claude)
xvector|1 day ago
But tbh I just switched to Anthropic, they need all the support they can get. Claude is great for question/answer.
mmanfrin|1 day ago
matsemann|1 day ago
rich_sasha|1 day ago
wmf|1 day ago
agentic_lawyer|8 hours ago
corford|1 day ago
willio58|1 day ago
The little respect I had left for Sam is now wiped. Makes me sick.
Growing up I always thought AI would be this beautiful tool, this thing that opens the gates to a new society where work becomes optional in a way. But I failed to think about human greed.
I remember following OpenAI way back when it was a non-profit explaining how uncontrolled AI could be highly detrimental. Now Sam has not only taken that non-profit and made it for-profit; it seems he’s making the most evil decisions he can for a buck.
Cancel your subscription, tell your friends to. And vote to heavily tax these companies and their leaders.
mythz|1 day ago
Ended up renewing my Claude sub today instead. Principled stances matter, and I no longer trust OpenAI to be a trustworthy custodian of my AI history.
afruitpie|1 day ago
I linked to https://notdivided.org/ as the reasoning why.
AbstractH24|1 day ago
Was shocking back then to think how far we’ve come.
adverbly|1 day ago
rrrpdx1|1 day ago
cjonas|1 day ago
fandorin|1 day ago
mrcwinn|1 day ago
IAmGraydon|1 day ago
unknown|1 day ago
[deleted]
e40|1 day ago
deadbolt|1 day ago
netsroht|1 day ago
gammarator|1 day ago
gammarator|1 day ago
wannabe_loser|1 day ago
throwaway20261|1 day ago
So by that measure the US govt can go get some Israeli software to surveil their domestic populace!
Homo sapiens deserve to become extinct.
jdiaz97|1 day ago
insane_dreamer|1 day ago
This is a red line for me. It's clear OpenAI has zero values and will give Hegseth whatever he wants in exchange for $$$.
https://www.nytimes.com/2026/02/27/technology/openai-reaches...
imwideawake|1 day ago
Screw Sam, and screw OpenAI. I've been a customer of theirs since the first month their API opened to developers. Today I cancelled my subscription and deleted my account.
I'd already signed up for Claude Max and had been slow to cancel my OpenAI subscriptions. This finally made the decision easy.
impulser_|1 day ago
Anthropic probably made the mistake of questioning the military's activities involving Claude after the Venezuela mission and wanted reassurance that the model wouldn't be used for the red lines. The military didn't like this, told them "we aren't using your models unless you agree not to question us," and then the back and forth started.
In the end, we will probably have both OpenAI and Anthropic providing AI to the military, and that's a good thing. I don't think they will keep the supply-chain-risk designation on Anthropic for more than a week.
Monotoko|1 day ago
(Person Of Interest for those who haven't seen it, watched it a decade ago and it's actually quite surprising how on point it ended up being)
xvector|1 day ago
Why? It is in the admin's interest to absolutely destroy Anthropic. Make them an example.
fabbbbb|1 day ago
hwc|20 hours ago
TeeWEE|1 day ago
bambax|1 day ago
Right. Pete "FAFO" Hegseth is a model of intelligence, moderation, and respect for due process. Nothing to see here.
levanten|1 day ago
SpaceL10n|1 day ago
taway1874|1 day ago
AmericanOP|1 day ago
unknown|1 day ago
[deleted]
elAhmo|1 day ago
kseniamorph|1 day ago
lm28469|1 day ago
The same day:
Pssst psst Samy Samy, come here we have money and data psst
> Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
superkuh|1 day ago
petee|1 day ago
interestpiqued|1 day ago
darkstarsys|1 day ago
m4rtink|1 day ago
jstummbillig|1 day ago
Under normal circumstances, that would seem really plausible. But given how far Trump continues to go just out of spite and to project power, it actually is the opposite.
I am fully prepared to believe that they got absolutely nothing else out of it (to date).
matsemann|1 day ago
vldszn|1 day ago
d--b|1 day ago
They’re willing to let their brand go to trash for this government contract.
Pretty much every American is standing with Anthropic on this. No one left or right wants mass surveillance and terminators. In fact, no one in the world wants this, except the US military.
But Altman seems so desperate to keep the cash coming he’s ready to do anything.
redml|1 day ago
kneel|1 day ago
Use it to save your data, shouldn't be hard to get it working elsewhere
straydusk|1 day ago
However, if you live in the US and pay passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:
* Make a negotiation personal
* Emotionally lash out and kill the negotiation
* Complete a worse or similar deal, with a worse or similar party
* Celebrate your worse deal as a better deal
Importantly, you must waste enormous time and resources to secure nothing of substance.
That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.
voganmother42|1 day ago
ocdtrekkie|1 day ago
LarsDu88|1 day ago
This is really about the imminent strike on Iran which is now super telegraphed. They are gonna use ChatGPT for target selection, and the likely outcome is that it will fuck things up and a bunch of civilians are going to die because of this decision.
When this happens, Altman will go from being merely a grifter to having blood on his hands.
voganmother42|1 day ago
greenchair|1 day ago
coffeebeqn|1 day ago
cogman10|1 day ago
A lot of innocent people are about to be harmed because the cogs of fascism are lubricated with blood.
hnthrowaway0315|1 day ago
mkozlows|1 day ago
1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.
2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.
Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.
slopinthebag|1 day ago
Just to be clear, you believe that the correct, principled stand is that it's OK to use their models for killing people and civilian surveillance?
Both OAI and Anthropic have the same moral leg to stand on here; OAI is just not hypocritical about it.
dataflow|1 day ago
> The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
(1) Well, did both sides sign the agreement and is it actually effective? Or is it still sitting on someone's desk until it can get stalled long enough?
(2) What does "agreement" even mean? Is it a legally enforceable contract, or just some sort of MoU or pinkie promise?
(3) If it's a legally enforceable contract, is it equally enforceable on all of their contracts, or just some? Do they not have existing contracts this would need to apply to?
(4) What does "reflects them in law and policy" even mean? Since when does DoW make laws, and in what sense do their laws reflect whatever the agreement was? Are these laws he can point to so everyone else can see? Can he at least copy-paste the exact sentences the government agreed to?
outside1234|1 day ago
t0lo|1 day ago
rvz|1 day ago
The ones that did might as well leave. But there was no open letter when the first military contract was signed. [1] Now there is one?
[0] https://news.ycombinator.com/item?id=47176170
[1] https://www.theguardian.com/technology/2025/jun/17/openai-mi...
mrweasel|1 day ago
owenthejumper|1 day ago
DebtDeflation|1 day ago
vorticalbox|1 day ago
so foreign mass surveillance is all good?
jaybrendansmith|1 day ago
boxedemp|1 day ago
tibbydudeza|1 day ago
otterley|1 day ago
The whole story makes no sense to me. The DoW didn’t get what they wanted, and now Anthropic is tarred and feathered.
https://www.wsj.com/tech/ai/trump-will-end-government-use-of...
“OpenAI Chief Executive Sam Altman said the company’s deal with the Defense Department includes those same prohibitions on mass surveillance and autonomous weapons, as well as technical safeguards to make sure the models behave as they should.”
midnitewarrior|1 day ago
unknown|1 day ago
[deleted]
verdverm|1 day ago
I always assumed those folks need a way to look strong with their base for a media moment over equitable application of the policies or law.
unknown|1 day ago
[deleted]
robertwt7|1 day ago
arendtio|1 day ago
On the surface, it looks like both rejected 'domestic mass surveillance' and 'autonomous weapon systems', but there seem to be important differences in the fine print, since one company is being labeled a 'supply chain risk' while the other 'reached the patriotic and correct answer'.
One explanation would be that the DoW changed its demands, but I doubt that. Instead, I believe OpenAI found a loophole that allows those cases under certain conditions.