
OpenAI agrees with Dept. of War to deploy models in their classified network

1382 points | eoskx | 1 day ago | twitter.com

https://xcancel.com/sama/status/2027578652477821175

https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...

644 comments



Imnimo|1 day ago

I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.

tedsanders|1 day ago

I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.

zingerlio|1 day ago

OpenAI employees held knives to their own necks to demand that Altman come back and be their boss [1], not too long ago, right? Altman wags his tongue and makes them a solid paycheck. "We will not be divided," unless the water boils slowly enough. Wait a few months: he will renegotiate the terms with the DoD, just like his move to turn OpenAI into a for-profit.

[1]: https://www.wired.com/story/openai-staff-walk-protest-sam-al...

tempaccount420|1 day ago

Didn't the safety-conscious employees already leave when OpenAI fired Sam Altman and then re-hired him?

In my mind the only people left are those who are there for the stocks.

arugulum|1 day ago

> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.

But they did.

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

2snakes|1 day ago

I think it is like a loyalty test to an authority above the law (executive immunity) in order to do business. “If we tell you to do so, you may do something you thought was right or wrong.” It is like an induction into a faction and the way the decisions could be made. Doesn’t necessarily mean anything about “in practice in the future”, just that the cybernetic override is there tacitly. If the authority thinks they can get away with something, they will provide protection for consequences too. Some people more equal than others when it comes to justice for all, etc. There are probably alternative styles for group decision making…

weatherlite|1 day ago

> I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this

Well some may voluntarily leave, some will be actively poached by Anthropic perhaps and some I suppose will stay in their jobs because leaving isn't an easy decision to make.

coliveira|1 day ago

Yes, what is implied in this episode is that all big companies that do AI development or provide computing for AI are now signing up for these very shady uses of their technologies.

granzymes|1 day ago

>Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.

Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing through all its political capital.

ecocentrik|1 day ago

I think the problem might actually be with enforcing the red lines. The events of the last few weeks and this new deal only make sense if Anthropic was trying to find out how Palantir and the Pentagon had circumvented its restrictions in order to enforce them, like a company actually concerned about the misuse of its product. OpenAI most likely came in with assurances that they wouldn't attempt to enforce their restrictions.

ivan_gammel|1 day ago

Another plausible explanation, familiar to a lot of people in other countries, is banal corruption. Kick out one competitor on bogus allegations, then the next day invite another one in… what else could it be?

crowcroft|2 hours ago

It is difficult to get a man to understand something, when his salary depends on his not understanding it.

blueblisters|1 day ago

My knee-jerk reaction to this was that it looks like the kind of opportunistic maneuver Sam is known for, and I'm considering canceling my subscriptions and business with OpenAI.

But what's the most charitable / objective interpretation of this?

For example - https://x.com/UnderSecretaryF/status/2027594072811098230

Does it suggest that the determination of "lawful use" and Dario's concerns fall upon the government, not the AI provider?

Other folks have claimed that Anthropic planned to burn the contentious redlines into Claude's constitution.

Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective Anthropic's stand seems like the correct long-term approach. And at least some AI researchers appear to agree.

cedws|1 day ago

I think Altman probably rationalised it to himself by thinking that if he doesn’t do it, Musk/xAI will, and they give zero fucks about safety. So maybe he told himself that it’s better if OpenAI does it.

Analemma_|1 day ago

As people have repeatedly mentioned, if the War Department was unhappy with Anthropic's terms, they could have refused to sign the contract. But they didn't: they were fine with it for over a year. And if they changed their mind, they could've ended the contract and both sides could've walked away. Anthropic said that would've been fine. But that's not what happened either: they threatened Anthropic with both SCR designation and a DPA takeover if Anthropic didn't agree to unilateral renegotiation of terms that the War Department had already agreed were fine.

It's absurd, and doubly so if OAI's deal includes the same or even similar redlines to what Anthropic had.

manmal|1 day ago

Unless you're using an enterprise plan or pay per token, you're not hurting their business at all by cancelling. The consumer plans are heavily subsidised.

gabeh|1 day ago

It's only $200 from me for the remainder of the year but you're not getting it anymore OpenAI. Voting with my wallet tonight. Really sad, I've followed OpenAI for years, way before ChatGPT. It's just too hard to true up my values with how they've behaved recently. This sucks. Goodnight everyone.

unfunco|1 day ago

I cancelled and deleted my account and I got an email immediately with a pro-rata refund. You can get that money back.

user0648|1 day ago

Same. Moving to Anthropic. At some point we can’t let the slide continue

jobs_throwaway|1 day ago

Just cancelled my Plus plan as well. I will still wait to see how things play out before deciding if I'll delete my account altogether, but OpenAI's actions simply don't align with my values at the moment. Very disappointing.

tim333|1 day ago

It's been kind of downhill since the 2023 Altman firing and rehiring.

ukblewis|1 day ago

[deleted]

quantumwannabe|1 day ago

More details on the difference between the OpenAI and Anthropic contracts from one of the Under Secretaries of State:

>The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.

https://x.com/UnderSecretaryF/status/2027566426970530135

> For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.

> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.

> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here

https://x.com/UnderSecretaryF/status/2027594072811098230

toraway|1 day ago

  It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here

He's an administration official openly cheerleading his team. This should be characterized as the insider perspective/spin, not a neutral analysis of the relevant facts.

MostlyStable|1 day ago

Even this most-charitable-possible (to DoW) explanation does not come close to justifying the supply chain risk designation. It is absolutely enough (and honestly more than enough) for a contract cancellation and a switch to a competitor. DoW could have done that for any reason at all, or no reason at all. If they had issues with Anthropic's terms, they 100% should have done that.

Nothing in the quoted text comes anywhere close to the realm of justifying the retaliatory actions.

advisedwang|1 day ago

A government promise that they'll only do lawful things is not reassuring at all:

1. We've seen government lawyers write memos explaining why such-and-such obviously illegal act is legal (see: torture memo). Until challenged, this is basically law.

2. We've seen government change the law to make whatever they want legal (see: patriot act)

3. We've seen courts just interpret laws to make things legal

A contractor doesn't realistically have the power to push back against any of these avenues if they agree to allow anything legal.

(At the risk of triggering Godwin's Law, remember that for the most part the Holocaust was entirely legal: the Nazis established the necessary authorization. Just to illustrate that when it comes to certain government crimes, the law alone is an insufficient shield.)

makeramen|1 day ago

The DoW wants to be beholden only to the laws, not to Anthropic's TOS.

So the question is: do you trust the government to effectively govern its own use of AI? or do you trust Anthropic's enforcement of its TOS?

ignoramous|1 day ago

> More details on the difference...

Does the qualifier "domestic" for mass surveillance mean that OpenAI allows the use of its models for whatever isn't "domestic"?

  ... Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force ...

ajkjk|1 day ago

how I wish that "patriotic" meant something instead of just "did what we wanted". I'm so tired of living in an era where every communication made by every organization feels like a lie

SpicyLemonZest|1 day ago

You're quoting social media posts from a regime official who says he didn't participate in these negotiations and doesn't work for the relevant department.

If his characterization of the agreement is correct, which I will not believe and you should not believe until a trustworthy news outlet publishes the text, I suppose this would convince me that Hegseth does not literally plan to build a Terminator for democracy-ending purposes. There's a lot of inexcusable stuff here regardless, but perhaps merely boycotting OpenAI and the US military would be a sufficient response if this all checks out.

cube00|1 day ago

If the redlines are the same how'd this deal get struck?

ChatGPT maker OpenAI has the same redlines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.

https://edition.cnn.com/2026/02/27/tech/openai-has-same-redl...

slim|1 day ago

Look more carefully at what Sam Altman said: he did not say he won't remove technical safeguards against surveillance and autonomous killing; instead he said "We also will build technical safeguards to ensure our models behave as they should"

skybrian|1 day ago

You're expecting logic from the Trump administration and that's not really how they do things. Maybe it was never about the redlines? Maybe they decided Anthropic was their enemy, and that was their excuse.

spongebobstoes|1 day ago

deals are based on personal relationships, not abstract logic

spprashant|1 day ago

Just uninstalled the app and canceled my subscription. OpenAI can't justify their insane valuation without a user base. Especially when there are capable models elsewhere.

deaux|1 day ago

All OpenAI employees during the board revolt that vouched for sama's return are personally responsible.

swat535|1 day ago

OpenAI employees revolted for their millions worth of stock, not for principle.

Anyone thinking they have any virtue is naive.

push0ret|1 day ago

So they agreed to the same red lines that had earlier led to the fallout with Anthropic? Kind of strange.

arppacket|1 day ago

I bet Sam secretly pledged to DoD that the red lines were only temporary, for optics and to calm employees at the all hands meeting.

A few months down the line, OpenAI will quietly decide that their next model is safe enough for autonomous weapons, and remove their safeguard layer. The mass surveillance enablement might be an indirect deal through Palantir.

yoyohello13|1 day ago

Sam saw Anthropic was getting too competitive. So he called his buddies in the gov to knock them down a peg.

harmonic18374|1 day ago

I don't trust Sam to be telling the truth. It would be to his benefit to lie about this and make Anthropic look bad, so he of course would, even if it's not actually the case.

fintechie|1 day ago

Well you know how it goes... you need to read between the lines. I can agree with you on your "principles", but not enforce them myself.

fwlr|1 day ago

It makes sense if you imagine the real motivation is “make sure the AI contracts go to my good friend Sam”, and all the red line stuff is just a way to pick a fight with Anthropic.

foobarqux|1 day ago

No, the difference is that the government agrees to no "unlawful" use as determined by the government.

Anthropic said that mass surveillance was per se prohibited even if the government self-certified that it was lawful.

Jcampuzano2|1 day ago

I would put bets on the issue being that it was pointed out that Anthropic's models were used to assist the raid in Venezuela, Anthropic then aggressively doubled down on their rules/principles, and the DOD didn't like being called out on that, so they lashed out, hard.

If there's anything this admin doesn't like, it's being postured against or called out by literally anyone, especially in public.

Monotoko|1 day ago

I don't even think Anthropic balked at being used to assist, as long as a human has the final say.

davidw|1 day ago

We need some kind of group like "tech people with morals". I'm done with these people and their corruption and garbage.

matsemann|1 day ago

It's why I think "software engineer" is a misnomer. We don't have a license, we don't have an ethics code, we don't sign off on stuff. In other disciplines, an engineer could topple a project they feel is unsafe or against code, and be backed by their union if replaced. A software engineer just says yes while their stock is still vesting, and gets replaced if they don't.

padolsey|1 day ago

Not a group per se but I maintain an index of 'good' people in tech here, and their contraries - https://goodindex.org

jakeydus|1 day ago

A union?

t0lo|1 day ago

Yeah, some new banner to organise around; the hard part is easily communicating that you're an ethical technologist and finding others.

curiousgal|1 day ago

This, honestly. Seeing all those billionaires on inauguration day lined up to kiss the ring was utterly pathetic. Like what is the fucking point of having billions of dollars if you're just going to be someone else's bitch. And for what? A couple more billion dollars. Oof

KronisLV|1 day ago

In an imaginary world, this would be a precursor to Anthropic coming to the EU in a greater capacity and teaming up with Mistral, eventually leading to the kind of innovation and progress that DeepSeek forced upon the West, benefiting everyone in the long run. They seem to have the morals for it, and the respect for human rights and life, given their recent announcement (after some backtracking), unlike OpenAI. Sadly, that's not the real world.

tao_oat|1 day ago

I'd apply to work for Anthropic in a heartbeat if it was a European company.

ozgung|1 day ago

Do I understand this correctly:

An algorithm, an ML model trained to predict next tokens to write meaningful text, is going to KILL actual humans by itself.

So killing people is legal,

Killing people by a random process is legal,

A randomized algorithm deciding on who to kill is legal,

And some of you think you are legally protected because they used the word “domestic”?

mpalmer|1 day ago

I don't think you do?

Who said that any of it is legal? Keeping in mind that when the government does something, it usually takes more than 24h for there to be an official determination on whether they broke the law.

nsvd2|1 day ago

Well, DHS has shown this year that they won't show restraint when it comes to attacking American citizens. I would expect that trend to continue.

boxedemp|1 day ago

Killing people has been legal forever. But you have to do it at scale for it to be legal.

booleandilemma|1 day ago

Is it possible the killing machine could hallucinate and kill some random, innocent person?

techpression|1 day ago

Domestic means nothing. It's like the company Daniel Ek invested in saying they won't sell weapons to "Democracies"; in the context of warfare and control these words are meaningless.

They will deploy this on a domestic scale and claim to use it to locate non-domestic threats. I can’t believe anyone is falling for this.

pbnjay|1 day ago

I had kept my Plus subscription just because I was lazy, and it was inexpensive and convenient… but this turn definitely helped me get off the fence. I am exporting and deleting my data now, and the cancellation is already done.

pbnjay|15 hours ago

For anyone reading this later, I got my data export and deleted my account and got a pro-rated refund. So double whammy!

tintor|1 day ago

Difference from Anthropic's deal is:

- OpenAI is ok with use of their AI for autonomous weapons, as long as there is "human responsibility"

- Anthropic is not ok with use of their AI for autonomous weapons

IAmGraydon|1 day ago

I think you guys are giving far too much attention to the "autonomous weapons" angle and not enough to the "spying on Americans" angle. It makes no sense to use an LLM to power an autonomous weapon. It does make a lot of sense to use an LLM to monitor communications and public social media profiles to create a list of "domestic terrorists" that they can then target. I'm willing to bet this is what the administration wanted to use Anthropic for.

fiatpandas|1 day ago

>human responsibility for the use of force, including for autonomous weapon systems

So there's the difference, and an erasure of a red line. OpenAI is fine with autonomous weapon systems. Requiring human responsibility isn't saying much; there are already military courts, rules of engagement, and international rules of war.

ttrashh|1 day ago

Cancel your subscription. It's the least you can do.

adangert|1 day ago

Let me reiterate some points for people here:

Income and revenue sources always, inevitably, and without fail, determine behavior.

aoeusnth1|1 day ago

I think your theory might be missing an extremely relevant and timely counterexample?

operator_nil|1 day ago

So does this mean that OpenAI will give whatever the DoD asks for and they will pinky swear that it won’t be used for mass surveillance and autonomous killing machines?

insane_dreamer|1 day ago

yes

and we know we can trust openAI because they were founded on "open" and "safe" AI (up until they realized how much money there was to be made, at which point their only value changed to "make money")

AbstractH24|1 day ago

It’s amazing how quickly the players keep shifting here.

Yesterday and the day before sentiment seemed to be focused on “Anthropic selling out”, then that shifted to “Anthropic holds true to its principles in a David vs Goliath” and “the industry will rally around one another for the greater good.” But suddenly we’re seeing a new narrative of “Evil OpenAI swoops in to make a deal with the devil.”

Reminds me of that weekend where Sam Altman lost control of OpenAI.

deepfriedbits|1 day ago

"There are decades where nothing happens and weeks where decades happen."

karmasimida|1 day ago

Sam is a player and honestly the more interesting one in the whole thing.

Mad respect to Sam; now I believe OpenAI has a better chance to win the race.

slibhb|1 day ago

I'm unsure how to feel about this whole dust-up. It doesn't seem like much has changed in substance. Maybe OpenAI outmaneuvered Anthropic behind the scenes. Possibly Anthropic was seen as not behaving deferentially enough towards the government. But this administration has proven comically corrupt, so it wouldn't surprise me if money was involved. Will be interested to see what journalists turn up.

dgxyz|1 day ago

Sam Altman being a complete bell end? Who'd have thought it.

I hope everyone goes and works for Anthropic and OpenAI collapses.

Markets going to be interesting on Monday. This plus a war. Urgh.

pu_pe|1 day ago

So this week we've learned that even the government assesses that Anthropic has the better model, and that OpenAI leadership has no concern for safety whatsoever.

BoiledCabbage|1 day ago

So they agreed to the exact same clauses that Anthropic put forward but with OpenAI instead?

So it wasn't about those principles making them a supply chain risk? They're just trying to punish Anthropic for being the first ones to stand firm on those principles?

iainctduncan|1 day ago

Did anyone ever doubt sama would just follow the money?

weasels gonna weasel

vander_elst|1 day ago

Subscribers should be aware of what they are supporting. I think that keeping an OpenAI account can be considered active support of this decision, at least for private individuals who can easily change providers.

kledru|1 day ago

Sorry, despite sama's public statements of some sort of solidarity with Anthropic, this looks like a plot to take over from a losing position.

Sadly it would be very difficult for Anthropic to relocate to another country with their IP, models, and infrastructure.

(Guess I need to build everything I intended this year in a weekend.)

bodobolero|1 day ago

I canceled my ChatGPT subscription and switched to Lumo Plus subscription https://lumo.proton.me/about I also considered https://mistral.ai/products/le-chat

Both are based in Europe but Proton Lumo has the better privacy promises.

Would be interested in experiences of others with those alternatives for question/answering type research (not for coding for which there exist other, better alternatives like Gemini and Claude)

xvector|1 day ago

If you want privacy the only option that reasonably delivers is Moxie's https://confer.to

But tbh I just switched to Anthropic, they need all the support they can get. Claude is great for question/answer.

mmanfrin|1 day ago

Absolute disgrace of a person and organization.

matsemann|1 day ago

From an open non-profit to a war machine in such a short time is baffling.

rich_sasha|1 day ago

Is the Pentagon signing a EULA confirming all their data will now be used, anonymised, for improving the service?

wmf|1 day ago

Obviously not? You know enterprise customers don't have the same EULA as consumers, right?

agentic_lawyer|8 hours ago

From the man who founded Worldcoin and built a grandiose vision of having the biometrics of every human in the developing world saved in his own corporate database: America, you're next.

corford|1 day ago

If you're unhappy with this, an immediate way to signal it is with your wallet. In my case I've just uninstalled chatgpt from my phone, cancelled my subscription and will up my spend with anthropic.

willio58|1 day ago

Thanks for the reminder. Doing the same now.

The little respect I had left for Sam is now wiped. Makes me sick.

Growing up I always thought AI would be this beautiful tool, this thing that opens the gates to a new society where work becomes optional in a way. But I failed to think about human greed.

I remember following OpenAI way back when it was a non profit explaining how AI uncontrolled could be highly detrimental. Now Sam has not only taken that non profit and made it for-profit. It seems he’s making the most evil decisions he can for a buck.

Cancel your subscription, tell your friends to. And vote to heavily tax these companies and their leaders.

mythz|1 day ago

Perfect timing. I had already cancelled my Claude sub over their OAuth ban in external tools and was about to pick up a Codex sub as the next best alternative.

Ended up renewing my Claude sub today instead. Principled stances matter, and I no longer trust OpenAI to be custodians of my AI history.

afruitpie|1 day ago

Just canceled my subscription! I immediately received an email with the subject “We’d love your feedback on why you canceled your ChatGPT plus subscription” and a link to a survey.

I linked to https://notdivided.org/ as the reasoning why.

AbstractH24|1 day ago

I’d like to say I did that but I already canceled my subscription 4 months ago in favor of Claude and Gemini based purely on product quality.

Was shocking back then to think how far we’ve come.

adverbly|1 day ago

Deleted all chats and deleted my account.

rrrpdx1|1 day ago

Totally agree. Signed up for a claude code account and will not give OpenAI any money in the future. Let's see what Google does. I will definitely vote with my wallet.

cjonas|1 day ago

Thanks for reminding me. Been meaning to cancel for months.

fandorin|1 day ago

Same here. Removed my account, deleted the app.

mrcwinn|1 day ago

I canceled my subscription, wiped my history, closed my account, deleted the app. Using Claude Max.

IAmGraydon|1 day ago

Yep, I'm pulling the plug on my OAI account on Monday morning and switching to Anthropic.

e40|1 day ago

This is how OpenAI gets bailed out in an AI crash, too big to fail becomes too important to fail.

deadbolt|1 day ago

Choosing to go along with calling it the "Department of War" tells you all you need to know.

netsroht|1 day ago

Remember when OpenAI was too afraid to release the full GPT-2 model (it had only 1.5B params) because humanity apparently wasn't ready for it? Look where we are just a couple of years later. I really admired them back in the day for OpenAI Gym and PPO etc.

wannabe_loser|1 day ago

I guess we aren't curing cancer with ai anymore

throwaway20261|1 day ago

It is quite shocking that almost all AI companies are saying "we are not ok with domestic surveillance" but will happily sign up to surveil the rest of the world's population.

So by that measure the US govt can go get some Israeli software to surveil its domestic populace!

Homo sapiens deserve to become extinct.

jdiaz97|1 day ago

cancelling my openai subscription, they're gonna miss my 20 USD

imwideawake|1 day ago

Google, OpenAI, and Anthropic should all have each other's backs when it comes to hard lines like this. Sam can say whatever he wants, but signing this deal on the same day Trump and Hegseth went scorched earth on Anthropic — for standing up for the very values OpenAI claims to hold — is sleazy.

Screw Sam, and screw OpenAI. I've been a customer of theirs since the first month their API opened to developers. Today I cancelled my subscription and deleted my account.

I'd already signed up for Claude Max and had been slow to cancel my OpenAI subscriptions. This finally made the decision easy.

impulser_|1 day ago

For the people who don't understand how they got a deal with the same redlines: it's probably because OpenAI agreed not to question them. The safeguards are there, both parties agree, now fuck off and let us use your model how we see fit.

Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission and wanting reassurance that the model wouldn't be used to cross the redlines; the military didn't like this, told them "we aren't using your models unless you agree not to question us," and then the back and forth started.

In the end, we will probably have both OpenAI and Anthropic providing AI to the military and that's a good thing. I don't think they will keep the supply chain risk on Anthropic for more than a week.

Monotoko|1 day ago

Anthropic vs OpenAI will probably be The Machine vs Samaritan

(Person Of Interest for those who haven't seen it, watched it a decade ago and it's actually quite surprising how on point it ended up being)

xvector|1 day ago

> I don't think they will keep the supply chain risk on Anthropic for more than a week.

Why? It is in the admin's interest to absolutely destroy Anthropic. Make them an example.

fabbbbb|1 day ago

Anyone having success with exporting data from ChatGPT? Got the export email 11 hours ago but still no download link..

hwc|20 hours ago

Please note that the US Congress has not approved the existence of a "Department of War" this century. Calling it the Department of Defense is the only legally correct way to refer to it.

TeeWEE|1 day ago

If you work at OpenAI, leave now while you can.

bambax|1 day ago

> In all of our interactions, the DoW displayed a deep respect for safety

Right. Pete "FAFO" Hegseth is a model of intelligence, moderation, and respect for due process. Nothing to see here.

levanten|1 day ago

Funny that these are the same people who have been sounding the alarm on the dangers of AI singularity. Now they cannot wait to put their tools in weapons.

SpaceL10n|1 day ago

Does deploying these models in "the classified network" also mean this technology is going to be used to help kill people?

taway1874|1 day ago

Well ... bumped up my Claude subscription from Pro to Max and closed out my OpenAI accounts. It's a drop in the ocean but I'll sleep better knowing I did the right thing. Thanks ChatGPT! It was good knowing you.

elAhmo|1 day ago

All that money and not a single ounce of integrity.

kseniamorph|1 day ago

Is there anyone who really understands what’s different about the OpenAI agreement? Or maybe these are just Sam Altman’s public statements that don’t actually reflect the real terms of the deal. I honestly can’t figure it out.

lm28469|1 day ago

> OpenAI CEO Sam Altman shares Anthropic’s concerns when it comes to working with the Pentagon

The same day:

Pssst psst Samy Samy, come here we have money and data psst

> Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.

superkuh|1 day ago

I have just canceled all services and deleted my account with OpenAI. They can get money from the current US regime but I will not contribute to their violations of the constitution.

petee|1 day ago

This explains the "Free Codex" offer I just got in my email.

darkstarsys|1 day ago

All of this, the news articles, the social media discussion, this very discussion, will be part of the training set for future AIs. What will they learn from this?

m4rtink|1 day ago

So this is indeed how OpenAI survives (a little bit longer ?) - government bailout.

jstummbillig|1 day ago

> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.

Under normal circumstances, that would seem really plausible. But given how far Trump continues to go just out of spite and to project power, it actually is the opposite.

I am fully prepared to believe that they got absolutely nothing else out of it (to date).

matsemann|1 day ago

OpenAI was the biggest donor ($25 million) to Trump's campaign. This is them getting their back scratched in return.

d--b|1 day ago

At this stage, everything OpenAI does is to try to keep investors investing.

They’re willing to let their brand go to trash for this government contract.

Pretty much every American is standing with Anthropic on this. No one left or right wants mass surveillance and terminators. In fact, no one in the world wants this, except the US military.

But Altman seems so desperate to keep the cash coming he’s ready to do anything.

redml|1 day ago

Regardless of your opinion of AI in government, Sam could not have picked worse optics for swooping in and making a deal. It just looks incredibly bad.

kneel|1 day ago

ChatGPT has an export function for all of your chats.

Use it to save your data; it shouldn't be hard to get it working elsewhere.
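For anyone moving their history elsewhere: the export arrives as a zip whose conversations.json stores each chat as a graph of message nodes. Here's a minimal sketch of flattening that into plain text. The field names ("mapping", "author", "parts") are assumptions based on past exports; OpenAI doesn't document the format, so they may change.

```python
import json

def extract_conversations(export_json):
    """Flatten a ChatGPT conversations.json export into plain-text chats.

    Assumes each conversation has a "mapping" of node id -> node, where a
    node's "message" carries an author role and a list of text "parts".
    """
    chats = []
    for conv in export_json:
        lines = []
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue  # root/system nodes may have no message
            parts = msg.get("content", {}).get("parts") or []
            text = "\n".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                lines.append(f'{msg["author"]["role"]}: {text}')
        chats.append({"title": conv.get("title", "untitled"),
                      "messages": lines})
    return chats

# Tiny in-memory sample in the assumed schema (normally: json.load(open(...)))
sample = [{
    "title": "demo",
    "mapping": {
        "n1": {"message": {"author": {"role": "user"},
                           "content": {"parts": ["hello"]}}},
        "n2": {"message": {"author": {"role": "assistant"},
                           "content": {"parts": ["hi there"]}}},
    },
}]
print(json.dumps(extract_conversations(sample), indent=2))
```

From there it's trivial to dump each chat to Markdown or feed it to another provider's import.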

straydusk|1 day ago

I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."

However, if you live in the US and pay passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:

* Make a negotiation personal

* Emotionally lash out and kill the negotiation

* Complete a worse or similar deal, with a worse or similar party

* Celebrate your worse deal as a better deal

Importantly, you must waste enormous time and resources to secure nothing of substance.

That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.

voganmother42|1 day ago

If you support openai, you support this admin, simple as

ocdtrekkie|1 day ago

Another good question: If OpenAI knew Anthropic wasn't a competitor... was the price higher? Will the federal government also pay more for a worse product?

LarsDu88|1 day ago

China has evacuated its embassies in Iran.

This is really about the imminent strike on Iran which is now super telegraphed. They are gonna use ChatGPT for target selection, and the likely outcome is that it will fuck things up and a bunch of civilians are going to die because of this decision.

When this happens, Altman will go from being merely a grifter to having blood on his hands.

greenchair|1 day ago

speaking of blood on their hands, they are fighting multiple lawsuits related to suicide advice chats.

coffeebeqn|1 day ago

Why would they use chatgpt for target selection?

cogman10|1 day ago

Iran, Cuba, and to classify people as "Antifa".

A lot of innocent people are about to be harmed because the cogs of fascism are lubricated with blood.

hnthrowaway0315|1 day ago

Ah, is it the time when Skynet starts to manifest itself...

mkozlows|1 day ago

So there are two possibilities here:

1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.

2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.

Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.

slopinthebag|1 day ago

> OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand.

Just to be clear, you believe that the correct, principled stand is that it's OK to use their models for killing people and civilian surveillance?

Both OAI and Anthropic have the same moral leg to stand on here, OAI is just not hypocritical about it.

dataflow|1 day ago

This seems full of loopholes.

> The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.

(1) Well, did both sides sign the agreement and is it actually effective? Or is it still sitting on someone's desk until it can get stalled long enough?

(2) What does "agreement" even mean? Is it a legally enforceable contract, or just some sort of MoU or pinkie promise?

(3) If it's a legally enforceable contract, is it equally enforceable on all of their contracts, or just some? Do they not have existing contracts this would need to apply to?

(4) What does "reflects them in law and policy" even mean? Since when does DoW make laws, and in what sense do their laws reflect whatever the agreement was? Are these laws he can point to so everyone else can see? Can he at least copy-paste the exact sentences the government agreed to?

outside1234|1 day ago

Screw OpenAI. Never opening that app again or using one of their models.

t0lo|1 day ago

Snakes- as predicted

mrweasel|1 day ago

Didn't the department of war announce that it would be working with xAI just this past December?

owenthejumper|1 day ago

Well, in the end this is great news: it virtually guarantees an Anthropic win in court.

DebtDeflation|1 day ago

At this point it seems the entire AI safety/ethics debate was nothing more than a marketing campaign to hype up the capabilities of the models: get people to think that if they're potentially dangerous, they must be so capable that you need to sign up for a subscription.

vorticalbox|1 day ago

> prohibitions on domestic mass surveillance

so foreign mass surveillance is all good?

jaybrendansmith|1 day ago

What part of "These people are fascists, and need to be stopped" are people failing to understand?

boxedemp|1 day ago

The part about money

tibbydudeza|1 day ago

While Dario is not my hero, given some of the outrageous things he sometimes says, he has a firm moral compass and a backbone that aligns with mine, and thus I will support his company and their products in my personal use and my work.

otterley|1 day ago

The stories I've been reading say that the DoW's agreement with OpenAI contains the very same limitations as the agreement with Anthropic did. In other words, they pressured Anthropic to eliminate those restrictions, Anthropic declined, then they made a huge fuss calling them "a radical left, woke company," put them on the supply-chain risk list, then went with OpenAI even though OpenAI isn't changing anything either.

The whole story makes no sense to me. The DoW didn’t get what they wanted, and now Anthropic is tarred and feathered.

https://www.wsj.com/tech/ai/trump-will-end-government-use-of...

“OpenAI Chief Executive Sam Altman said the company’s deal with the Defense Department includes those same prohibitions on mass surveillance and autonomous weapons, as well as technical safeguards to make sure the models behave as they should.”

midnitewarrior|1 day ago

Opportunism without principles at its finest.

verdverm|1 day ago

If the "safety stack" (guardrails) bit is true, it's the exact opposite of their beef with Anthropic... which is not surprising given who's running the US right now.

I always assumed those folks just need a media moment that makes them look strong to their base, rather than any equitable application of policy or law.

robertwt7|1 day ago

How did the DoW agree with OpenAI to the same terms initially put forward by Anthropic? Surely there's a catch here. Or is it just Sam's negotiation skill?

arendtio|1 day ago

So now we are waiting for Anthropic to explain to us what Sam agreed to and what they rejected.

On the surface, it looks like both rejected 'domestic mass surveillance' and 'autonomous weapon systems', but there seem to be important differences in the fine print, since one company is being labeled a 'supply chain risk' while the other 'reached the patriotic and correct answer'.

One explanation would be that the DoW changed its demands, but I doubt that. Instead, I believe OpenAI found a loophole that allows those cases under certain conditions.