Imnimo|2 days ago
I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.
tedsanders|1 day ago
baconner|1 day ago
tfehring|1 day ago
I have two qualms with this deal.
First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.
Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.
Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.
[0] https://x.com/sama/status/2027578652477821175
[1] https://x.com/UnderSecretaryF/status/2027594072811098230
ChadNauseam|1 day ago
I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.
But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or any point in the future.
retsibsi|1 day ago
In that case, what on earth just happened?
The government was so intent on amending the Anthropic deal to allow 'all lawful use', at the government's sole discretion, that it is now pretty much trying to destroy Anthropic in retaliation for refusing this. Now, almost immediately, the government has entered into a deal with OpenAI that apparently disallows the two use cases that were the main sticking points for Anthropic.
Do you not see something very, very wrong with this picture?
At the very least, OpenAI is clearly signaling to the government that it can steamroll OpenAI on these issues whenever it wants to. Or do you believe OpenAI will stand firm, even having seen what happened to Anthropic (and immediately moved in to profit from it)?
> and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples)
If OpenAI leadership sincerely wanted this, they just squandered the best chance they could ever have had to make it happen! Actual solidarity with Anthropic could have had a huge impact.
throwawaywd89e|1 day ago
_heimdall|1 day ago
Today it can't be used for mass surveillance, but the executive branch has all the authority it needs to later deem that lawful if it wishes to, the Patriot Act and others see to that.
Anthropic was making the limits contractually explicit, meaning the executive branch could change the line of lawfulness and still couldn't use Anthropic models for mass surveillance. That is where they got into a fight and that is where OpenAI and others can claim today that they still got the same agreement Anthropic wanted.
mattalex|1 day ago
The two things Anthropic refused to do are mass surveillance and autonomous weapons, so why do _you_ think OpenAI refused the same and still did not get placed on the exact same list?
It's fine to say "I'm not going to resign. I didn't even sign that letter", but thinking that OpenAI can get away with not developing autonomous weapons or mass surveillance is naive at the very best.
pear01|1 day ago
You, and your colleagues, should resign.
scarmig|1 day ago
jacquesm|1 day ago
So, can you please draw the line when you will quit?
- If the OpenAI deal allows domestic mass surveillance
- If OpenAI allows the development of autonomous weapons
- If OpenAI no longer asks for the same terms for other AI companies
Correct?
If so, then if I take your words at face value:
- By your reading non-domestic mass surveillance is fine
- The development of AI based weapons is fine as long as there is one human element in there, even if it could be disabled and then the weapon would work without humans involved
- If OpenAI asks for the same terms for other AI companies and those terms are not granted, that's also fine, because after all, they did ask.
I have become extremely skeptical when seeing people whose livelihood depends on a particular legal entity come out with precise wording around what does and does not constitute their red line, but I find it fascinating nonetheless, so if you could humor me and clarify I'd be most obliged.
rancar2|1 day ago
phs318u|1 day ago
chasd00|1 day ago
Edit: I don’t work at OpenAI or in any AI business and my neck is on the chopping block if AI succeeds… like a lot of us. Don’t vilify this guy trying to do what’s right for him given the information he has.
syllogism|1 day ago
It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.
If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.
It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?
What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?
latexr|1 day ago
And you believe the US government, let alone the current one will respect that? Why? Is it naïveté or do you support the current regime?
> If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit.
So your logic is your company is selling harmful technology to a bunch of known liars who are threatening to invade democratic countries, but because they haven’t lied yet in this case (for lack of opportunity), you’ll wait until the harm is done and then maybe quit?
I’ll go out on a limb and say you won’t. You seem to be trying really hard to justify to yourself what’s happening so you can sleep at night.
Know that when things go wrong (not if, when), the blood will be on your hands too.
roflburger|1 day ago
segmondy|1 day ago
andsoitis|1 day ago
The evidence seems to overwhelmingly point in the opposite direction.
mda|1 day ago
exizt88|1 day ago
Griffinsauce|1 day ago
What is your red line?
kaashif|1 day ago
OpenAI agrees to be put in the same position as Anthropic.
It seems like you must actually somehow believe that history will repeat itself, Hegseth will deem OpenAI a supply chain risk too, then move to Grok or something?
There's surely no way that's actually what you believe...
virtualritz|1 day ago
I don't mean this in any way rude, and I apologize if this comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.
[1] https://news.ycombinator.com/item?id=47189650#47189970
fluidcruft|1 day ago
There's a big difference between "the government won't use our tools for domestic surveillance" (DoW/DoD/OpenAI/etc) and "we won't allow anyone to use our tools to support domestic surveillance by the government" (Anthropic)
Hegseth and the current Trump admin are completely incompetent in execution of just about everything but competent administrations (of both parties) have been playing this game for a long time and it's already a lost cause.
assimpleaspossi|1 day ago
trvz|1 day ago
curiousgal|1 day ago
nullocator|1 day ago
datsci_est_2015|1 day ago
germandiago|1 day ago
I do not know, but I would not be very optimistic about those new terms.
Qiu_Zhanxuan|1 day ago
motbus3|1 day ago
Someone might just create a spawn of OpenAI with a different tag and do all the stuff there...
There is not much of a guarantee, I think.
sensanaty|1 day ago
q3k|1 day ago
ryan_n|1 day ago
leptons|1 day ago
And the US Military is forbidden from operating on US soil, but that didn't stop this administration from deploying US Marines to California recently.
You're fooling yourself if you think this administration is following any kind of rule.
vimda|1 day ago
mpalmer|1 day ago
Nekorosu|1 day ago
dannyfreeman|1 day ago
4b11b4|1 day ago
retornam|1 day ago
Standing up for what's right is often not easy and involves hard choices and consequences. Your leader has shown you and the world that he is not to be trusted.
I can't tell you what to do but I hope you make the right decision.
bambax|1 day ago
mmanfrin|1 day ago
mathisfun123|1 day ago
https://en.wikipedia.org/wiki/Motivated_reasoning
tibbydudeza|1 day ago
Spooky23|1 day ago
Y’all are developing amazing technology. But accept reality and drop whatever sense of moral righteousness you’re carrying here. Not because some asshole on the internet says so, but for your own mental health.
thisisit|1 day ago
I think it's wrong to ask someone to resign, but acting as if there is no issue here is debating in bad faith.
raw_anon_1111|1 day ago
You're being purposefully naive if you trust any government, and especially this government, to behave legally or ethically.
outside1234|1 day ago
Or Sam bribed the government to do this, which is also entirely possible.
ALittleLight|1 day ago
If you think that means your company isn't going to be involved in lethal autonomous weapons and mass domestic surveillance... I don't really know what to tell you. I doubt you really believe that. Obviously you will be involved in that and you are effectively working on those projects now.
UncleMeat|1 day ago
cyanydeez|1 day ago
fooker|1 day ago
matkoniecz|1 day ago
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons
Your understanding is entirely wrong. At least stop lying to yourself and admit that you are entirely fine with working on evil things if you are paid enough.
johnwheeler|1 day ago
Is it really worth the long-term risk being associated with Sam Altman when the other firms would willingly take you and probably give you a pay bump to boot?
It doesn't make sense to me why anyone would want to associate themselves with Altman. He is universally distrusted. No one believes anything he says. It's insane to work with a person whom PG, Ilya, Murati, and Musk have all designated a liar and just a general creep.
Defending him or the firm's actions instantly makes you look terrible, like you'll gladly take part in the "elites vs. UBI recipients" future his vision propagates.
Shame on you people. What a disgusting vision.
wjekkekene|1 day ago
wanderlust123|1 day ago
popalchemist|1 day ago
jackmott42|1 day ago
Imustaskforhelp|1 day ago
One got characterized as a supply chain risk, and so much for OpenAI getting the same.
And even that being said, I could be wrong, but if I remember correctly, OpenAI and every other company had basically accepted all uses, and it was only Anthropic that said no to these two demands.
And I think this whole scenario only became public because Anthropic refused; the deal could have been done quietly if Anthropic had wanted.
So OpenAI taking the deal now doesn't change the fact that, to me, it looks like they can always walk it back, and all the optics are horrendous for OpenAI, so I am curious what you think.
The thing I am thinking, on the other hand, is: why would OpenAI come out and say, "hey guys, yeah, we are going to feed autonomous killing machines"? Of course they are going to try to keep it a secret right before their IPO. You are an employee and you mention walking out of OpenAI, but with the current optics it seems that you and other OpenAI employees are more willing to keep working because the evidence isn't out yet. To me, as others have pointed out, it looks like slowly boiling the water.
OpenAI gets to have the cake and eat it too, but I don't think there's a free lunch. I simply don't understand why the DoD would make such a big fuss about Anthropic's terms being outrageous and then sign a deal with the same terms with OpenAI, unless there's a catch. Only time will tell how wrong or right I am.
If I may ask, how transparent is OpenAI from an employee's perspective? Just out of curiosity: would you as an employee be informed if OpenAI's top leadership (Sam?) decided that the deal gets changed and the DoD gets to have autonomous killing machines? Would you as an employee, or us as the general public, learn about it if the deal is done through secret back channels? Snowden showed that a lot of secret court deals were not made available to the public until he blew the whistle, but not everything gets whistleblown, so I am genuinely curious to hear your thoughts.
make3|1 day ago
jdiaz97|1 day ago
zingerlio|1 day ago
[1]: https://www.wired.com/story/openai-staff-walk-protest-sam-al...
randall|1 day ago
tempaccount420|2 days ago
In my mind the only people left are those who are there for the stock.
AbstractH24|1 day ago
DANmode|1 day ago
bobanrocky|1 day ago
arugulum|1 day ago
But they did.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
layer8|1 day ago
WD-42|1 day ago
My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.
khalic|1 day ago
propagandist|1 day ago
And they are crossing the picket line, which honestly I was sure they would do, though I did expect it to take a bit longer.
This is too transparent even for sama.
newguytony|1 day ago
adampunk|1 day ago
[deleted]
fooker|1 day ago
You could recoup your investment in a year by collecting toll. Expedited financing available on good credit!
2snakes|1 day ago
weatherlite|1 day ago
Well some may voluntarily leave, some will be actively poached by Anthropic perhaps and some I suppose will stay in their jobs because leaving isn't an easy decision to make.
latexr|1 day ago
Anyone who chooses to stay shouldn’t have signed the letter. What’s the point of doing it if you’re not going to follow through? If you signed the letter and don’t leave after the demands aren’t met, you’re a liar and a coward and are actively harming every signatory of every future letter.
ecocentrik|1 day ago
coliveira|1 day ago
miohtama|1 day ago
https://www.theguardian.com/world/2026/feb/21/tumbler-ridge-...
granzymes|1 day ago
Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing up all its political capital.
unethical_ban|1 day ago
davidw|1 day ago
ivan_gammel|1 day ago
pluc|1 day ago
4ndrewl|1 day ago
vander_elst|1 day ago
Sometimes money is more attractive than morality. So I guess money is the answer here.
chazftw|1 day ago
hirvi74|1 day ago
Do you mean the same OpenAI that has a retired U.S. Army General & former director of the NSA (Gen. Nakasone) serving on its board of directors?
chpatrick|1 day ago
KellyCriterion|1 day ago
crowcroft|8 hours ago
no_wizard|1 day ago
So they used Anthropic's own words to cover a power play, or pulled on relationships to see if they could get Anthropic to balk at it.
raw_anon_1111|1 day ago
https://www.levels.fyi/companies/openai/salaries
the_real_cher|1 day ago
They'll create the autonomous military robots themselves for that check.
tjwebbnorfolk|1 day ago
um, easy -- everyone has a price. Some of the most highly-paid workers on the planet work there.
Pay me $5M/yr and there are a LOT of things I wouldn't do for $300k.
garyclarke27|1 day ago
shevy-java|1 day ago
lazide|1 day ago
tmpz22|1 day ago
The morals were just there while it was easy virtue signaling.
Same for almost all Google, Facebook, etc. Prove me wrong, please.
righthand|1 day ago
outside1234|1 day ago
foo12bar|1 day ago
vineyardmike|2 days ago
There is more to this story behind the scenes. The government wanted to show power and control over our companies and industries. They didn’t need those terms for any specific utility, they wanted to fight “woke” business that stood up to them.
Supposedly OpenAI had the same terms as Anthropic (according to SamA). Maybe they offered it cheaper and that’s why they agreed. Maybe it’s all the lobbying money from OpenAI that let the government look the other way. Maybe it’s all the PR announcements SamA and Trump do together.
sigmar|1 day ago
"We put them into our agreement" is strange framing in Altman's tweet. It makes me think the agreement does mention the principles, but doesn't state them as binding rules the DoD must follow.
Imnimo|1 day ago
harmonic18374|1 day ago
I don't necessarily think he's lying, but there's so much obvious incentive for him to lie here (if only because his employees can save face).
pseudalopex|1 day ago
He said "human responsibility." Anthropic said "human in the loop."
And Anthropic reportedly refused to agree that any lawful purpose would be allowed.
SpicyLemonZest|2 days ago
Ultimately, I don't know how much the specific reasons matter. Pete Hegseth must be removed from office, OpenAI must be destroyed for their betrayal of the US public, that's all there is to it.
jeffbee|1 day ago