> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
My reading of this is that OpenAI's contract with the Pentagon only prohibits mass surveillance of US citizens to the extent that that surveillance is already prohibited by law. For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale. As I understand it, this was not the case with Anthropic's contract.
If I'm right, this is abhorrent. However, I've already jumped to a lot of incorrect conclusions in the last few days, so I'm doing my best to withhold judgment for now, and holding out hope for a plausible competing explanation.
(Disclosure: I'm a former OpenAI employee and current shareholder.)
OpenAI, the former non-profit whose board tried to fire the CEO for being deceptive, and which is no longer open at all, isn't exactly about ethics these days.
Even on a personal level: OpenAI has changed its privacy policy twice to let them gather data on me that they weren't gathering before. A lot of steps to disable it each time, and tons of dark patterns. And the data checkout just bugs out too; it's a fake feature to hide how much they're using everything you type to them.
This is exactly what it says: the only restrictions are the restrictions that are already in law. This seems like the weasel language Dario was talking about.
> For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale.
Third Party Doctrine makes trouble for us once again.
Eliminate that and MANY nightmare scenarios disappear or become exceptionally more complicated.
People often overlook how all the NSA-related activities and government overreach come with a nice memo from officials stating how "lawful" the questionable actions they're taking are.
Snowden previously leaked that the NSA and FBI accessed data directly from major U.S. internet companies. Now we have generative AI that can help identify targets much faster. IMO the government is amoral and interested in getting the best technology available and integrating it into its systems. So the CEO etc. can say one thing and do another.
Other nations, including Israel and the PRC, will also be working on their own implementations, because if they aren't, they know everyone else is. So this is just basic game theory.
But the kicker is that 5 years from now we will be able to run Codex 5.3x or Opus 4.6 on a $5000 Mac Studio, so nation states will want to immediately implement this kind of technology into their defense apparatus.
This is hilarious. I see their lawyers got together to find the most confusing way they could word it to throw people off and let everybody claim it says whatever's best for their own PR.
"Shall not be used as consistent with these authorities"?
So they shall only be used inconsistently with these authorities? That's the literal reading if you assume there's no typo.
Or did they forget a crucial comma that would imply they shall not use it, to the extent this provision is consistent with their authorities?
Or did they forget the comma but it was supposed to mean that they shall not use it, to the extent that not-doing so would be consistent with their authorities?
You gotta hand it to the lawyers; I'm not sure I could've come up with wording this deliberately confusing if they'd given me a million dollars.
Surely this is the main issue: DOGE and others have assembled massive databases of information about all Americans from across the government, and now they want to use AI to start making lists.
Thanks for speaking out, and yes, that was my interpretation as well, which I outlined below. This is nothing more than sugar coating on "lawful use", despite what OpenAI says and the contractual "safeguards" they tout, like the FDEs.
Combing through public forums on the internet looking for evidence of thoughtcrime, however, is fair game. The Trump admin will undoubtedly use tools like this to compile a list of political enemies or undesirables, which they will then use to harass people or selectively restrict individual rights. They're already doing this, and this is just going to make it easier for them.
> to the extent that that surveillance is already prohibited by law.
The problem with government contracts where you say "can't do anything illegal" is that THEY DECIDE WHAT IS LEGAL. We're lucky we live in a system where you can challenge the government, but whichever side of the aisle you're on, I think you believe people are trying to dismantle that feature (we just disagree on who is doing it, right?).
<edit>
THAT'S EXACTLY WHAT DARIO WAS ARGUING, and it is exactly what the DOD wanted to get around. They wanted to use Claude for all legal purposes, and Anthropic said no for moral reasons.
Also notice the subtle language in OpenAI's red lines. "No use of OpenAI technology for mass *domestic* surveillance." We've seen how this was abused by the NSA already since normal communication in the Internet often crosses international lines. And what they couldn't get done that way they got around through allies who can spy on American citizens.
</edit>
I think we need to remember that legality != morality. Law is our attempt to formalize morality, but I think everyone sees how easy it is to skirt[0].
> I believe this implies that the DoW can procure data on US citizens en masse from private companies - including
Call your senators. There's a bill in the senate explicitly about this. Here's the EFF's take [1]. IMO it's far from perfect but an important step. I think we should talk about this more. I have problems with it too, but hey, is anything in here preventing things from continuing to get better? It's too easy to critique and then do nothing. We've been arguing for over a decade, I'd rather take a small step than a step back.
> If I'm right, this is abhorrent.
Let's also not forget WorldCoin[2]. World (blockchain)? World Network?
I have no trust for Altman. His solution to distinguishing humans from bots is mass biometric surveillance. This seems as disconnected as the CEO of Flock or that Ring commercial.
Not to mention all the safety failures. Sora was released allowing real people to be generated? Great marketing. Glad they "fixed it" so quickly...
There's a lot happening now and it's happening fast. I think we need to be careful. We've developed systems to distribute power but it naturally wants to accumulate. Be it government power or email providers. The greater the power, the greater the responsibility. But isn't that why we created distributed power systems in the first place?
Personally I don't want autonomous, unquestioning killbots under the control of one or a small number of people. Even if you don't believe the one in control now is a psychopath (-_-), you can still agree that it's possible for that type of person to get control. Power corrupts. Things like killing another person should be hard, emotionally. That's a feature, not a flaw. Soldiers questioning orders is a feature, not a flaw. By concentrating power you risk handing that power to those who do not feel. We're making Turnkey Tyranny more dangerous.
[0] and law is probably our best attempt to make a formal system out of a natural language but I digress
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.
OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
I personally can agree with both, and I do believe that the Administration's behavior towards Anthropic was abhorrent, bad-faith, and ultimately damaging to US interests.
The language allows for the DoD to use the model for anything that they deem legal. Read it carefully.
It begins “The Department of War may use the AI System for all lawful purposes…” and at no point does it limit that. Rather, it describes what the DOW considers lawful today, and allows them to change the regulations.
As Dario said, it’s weasel legal language, and this administration is the master of taking liberties with legalese, like killing civilians on boats, sending troops to cities, seizing state ballots, deporting immigrants for speech, etc etc etc.
Sam Altman is either a fool, or he thinks the rest of us are.
The word "legal" is doing all of the heavy lifting, considering the countless adjudicated-illegal things the government is doing publicly. What happens behind classified closed doors?
I guess you can consider it a moral stance that if the government constantly does illegal things you wouldn't trust them to follow the law.
I know that's not what Anthropic said but that's the gist I'm getting.
No, this is very devious and insidious. What the executive branch believes is legal is the real agreement here. Trump can say anything is legal and that's that. There is no judicial oversight; there are no lawyers defending the rights of those being harmed. Trump can tell the Pentagon "everyone in Minnesota is a potential insurrectionist, do mass surveillance on them under the Patriot Act and the Insurrection Act".
Mass surveillance doesn't require a warrant; that's why they want it, that's why it's "mass". Warrants mean judicial oversight. Anthropic didn't disagree with surveillance where a court (even a FISA court!) issued a warrant. Trump just doesn't want to go through even a FISA court.
This is pure evil from Sam Altman.
Is anyone listing these people's names somewhere for posterity's sake? I'd hate to think this would all be forgotten. From Altman to Zuckerberg, if justice prevails they'll be on the receiving end of retribution.
> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
What if Anthropic's morals are "we won't sell someone a product for something it's not realistically capable of doing with a high degree of success"? The government can't do something if it's literally impossible (e.g. "safe" backdoors in encryption), but it's legal for them to attempt it even when failure is predetermined. We don't know that's what's going on here, but you haven't provided any evidence sufficient to differentiate between those scenarios, so it's fairly misleading to phrase it as fact rather than conjecture.
I didn't fully follow the saga, but isn't their "imposing their own morals" just "we do not want to allow you to let our AI go on an unsupervised killing spree"?
The United States Military, in its official capacity, has been performing illegal, extrajudicial assassinations of civilians in international waters for months now.
We have been sharing technology and weapons with Israel while it prosecutes a genocide in contravention of both US and International law.
We are currently prosecuting a war on Iran that is illegal under both US and International law.
Any aid given to such a force is to underwrite that lawlessness and it shows a reckless disregard for the very notion of a 'nation of laws'.
When OpenAI says, 'The Military can do what is legal', full in the knowledge that this military has no interest in even pretextual legality, one has to wonder why you hold that you 'agree with' both of these decisions.
Do you believe the flimsiest of lies in other aspects of your life?
Even if the autonomous weapon systems ‘perform as intended’, this does not in any way mean that they are not an enormous danger.
Secondly, as that is department policy and not a law or regulation, they appear to be saying that the cited directive is presently the only thing standing between the DOD and the use of autonomous weapons.
If that’s the case how hard is it to change or alter a directive?
And who decides what's legal? The US was collecting illegal tariff revenue for ten months. Does OpenAI need to wait for the Supreme Court to strike down autonomous killbots?
> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"? What happened to "We give each other the freedom to hold beliefs and act accordingly unless it does harm"? How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need? That sounds like you're buying into the reversed victim and offender narrative.
And this is not about whether one agrees with their beliefs. It is about giving others the right to have their own.
> OpenAI acceded to demands that the US Government can do whatever it wants that it claims is legal.
FTFY. The administration threw a fit and tried to retroactively demote a retired military officer for making a video saying, "Troops, you should disobey unlawful orders". Over 4000 times it has been told, "No, that's not what the law regarding detaining undocumented aliens means", and it continues doing it. Its first response to the Supreme Court saying, "the President can't impose tariffs" was "The Hell I can't!".
It's 100% clear that Trump thinks "what the law allows" and "what I want to do" are the same thing.
Rule of law requires that the majority of people in the system are committed to the rule of law, and refuse to go along with violations of it. Anthropic is being a good citizen here; OpenAI is not.
My interpretation of the difference is more like: Anthropic wanted the synchronous real-time authority to say "No we wont do that" (e.g. by modifying system prompts, training data, Anthropic people in the loop with shutdown authority). OpenAI instead asked for the asynchronous authority to re-evaluate the contract if it is breached (e.g. the DoD can use OpenAI tech for domestic surveillance, but there's a path to contract and service termination if they do this).
If my read is correct: I personally agree with the DoD that Anthropic's demands were not something any military should agree to. However, as you say, the DoD's reaction to Anthropic's terms is wildly inappropriate and materially harmed our military by forcing all private companies to re-evaluate whether selling to the military is a good idea going forward.
The DoD likely spends somewhere on the order of ~$100M/year with Google; but Google owns a 14% stake in Anthropic, which spends at least that much if not more on training and inference. All in all, that relationship is worth on the order of ~$10B+. If Google is put into the position of having to decide between servicing DoD contracts or maintaining Anthropic as an investee and customer, it's not trivially obvious that they'd pick the DoD unless forced to with behind-the-scenes threats and the DPA. Amazon is in a similar situation; it's only Microsoft whose DoD contracts are large enough that their decision is obvious. Hegseth's decision leaves the DoD, our military, and our defense materially weaker, both by refusing federal access to state-of-the-art technology and by creating a schism in the broader tech ecosystem where many players will now refuse to engage with the government.
Either party could have walked away from negotiations if they were unhappy with the terms. Alternatively: the DoD should have agreed to Anthropic's red lines, then constrained/compartmentalized their usage of Anthropic's technology to a clearly limited and non-combat capacity until re-negotiation and expansion of the deal could happen. Instead, we get where we're at, which is not good.
IMO: I know a lot of people are scared of a fascist-like future for the US, but personally I'm more fearful of a different outcome. Our government and military have lost all of their capacity to manufacture and innovate. It's been conceded to private industry, and it's at the point where private industry has grown so large that companies can seriously say "ok, we won't work with you, bye" and it just be, like, fine for their bottom line. The US cannot grow federal spending and cannot find a reasonable path to taxing or otherwise slowing down the rise of private industry. We're not headed into fascism (though there are elements of that in the current admin): we're headed into Snow Crash. The military is just a thin coordination layer of operators piecing together technology from OpenAI, Boeing, Anduril, Raytheon. Public governments everywhere are being out-competed by private industry, and in some countries it feels like industry tolerates the government because it still has some decreasing semblance of authority; in the US especially, that semblance of authority has been on a downward trend for years. Google's revenue was 7% of the US Federal Government's revenue last year. That's fucking insane. What happens when we get to the point where federal debt becomes unserviceable? When Google or Apple or Microsoft hit 10%, or 15%? Our government loses its ability to actually function effectively, and private industry will be there to fill the void.
Not great? Seems kind of loose language? It isn't OpenAI saying no autonomous weapons use, but only that use must be consistent with laws, regulations, and department policies: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities."
More of the same here. No wonder the DoD signed with OpenAI instead of Anthropic. Delegating morality to the law when you know the law is not adequate seems like "not a good thing".
"For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law."
It's a bit worse, because in the case of mass surveillance, they can't just make their own law; they would need a constitutional amendment, with three-quarters of US states signing off.
Aiding someone while you know they're trying to break the law is conspiracy to break the law. OpenAI is culpable. You can't sue the government in many cases, but you can with OpenAI.
It's also good to demonstrate to these companies that we're willing to move. If these companies know their entire userbase will just pack up and move at the first controversy, there won't be any controversies.
> I don't think Anthropic is a saint that will never do anything unethical. I don't think ChatGPT is any better or worse.
I sort of agree and think that over a long horizon, Open weights models are going to be the best / are the best
I do think only a fraction of companies might do what Anthropic did here. There must have been quite significant pressure on them to fold, but they didn't. So I'd rather try to do at least something to show companies that people do care about such things; it's best if we have at the very least some unconditional morals which are not for sale no matter the price.
I think that we can still have disagreements with Anthropic on matters and I certainly still have some disagreements about their thoughts on Open Models for example but in all regards I would trust them as more trustworthy than OpenAI imho.
That being said, given that I don't have a good GPU, I'm going to stop using ChatGPT as well and will use Claude (or Kimi?) like many people are doing. I do think that might be the path going forward.
How incredibly unsurprising. This is why it is pointless to make moral stands as employees when you do not ultimately have power over the company's decisions. The only power you have is to quit.
I wonder how many will do so, and how many will simply accept Sam's AI-written rationalization as his own and keep collecting their obscene pay packages…
It's perhaps too late in this case, but this is what unions are for. Sam Altman + a handful of scabs can't keep the lights on at OpenAI if a critical mass of engineers refuse to work until this decision is reversed (or, even better, not made at all, since the union would be part of that process).
Employees often have the power to oust the owner and take over the company; and more often than that have the power to have business grind to a halt. It does take a strong union and a culture of solidarity and sticking together of course, which I doubt we would find in a place like OpenAI.
This blog post really doesn't make it sound any better: there is no clear refusal to participate in the questionable uses Anthropic was against. Merely that it must be legal and must be tested.
This feels like IBM in the 1930s selling tabulating machines to the Germans and downplaying their knowledge of their use. They seem to want us to naively believe it won't be used for exactly what the military has always wanted: autonomous weapons and mass surveillance. Furthermore, there are much more mundane uses they might make of the technology that are perfectly legal yet morally in gray areas.
“The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”
So DoW did get the “all lawful purposes” language they were after, with reference to existing (inadequate, in my view) regulations around autonomous weapons and mass surveillance.
As a stealth CEO of a profitable SaaS, this is a nice reminder for my company to wind down its relationship with OpenAI. I have no doubt Anthropic will eventually become evil, but at least they have a backbone today.
Goodbye Sam.
Edit: Also, referring to the DOD as the Department of War is cringe.
"What if the government just changes the law or existing DoW policies?"
Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.
So this applies only if they change the law, not if they break the law.
"What happens if the government violates the terms of the contract?"
As with any contract, we could terminate it if the counterparty violates the terms. We don’t expect that to happen.
These communications offend me because they treat the audience like they’re stupid, stupid, stupid.
But I imagine that being honest about your corporate identity is suboptimal. It’s probably an important cognitive dissonance tool for the employees? It’s like when autocracies repeat big obvious lies endlessly. Gives those who want to opt out of reality an option.
> Fully autonomous weapons. The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.
Can anyone explain this constraint?
Why do fully autonomous weapons require edge deployment?
Does "fully autonomous" in this context mean "disconnected from the Internet"?
If so, can a drone with Internet connectivity use OpenAI?
Or maybe it's about on-premise requirements: the military doesn't want to depend on OpenAI's DCs for weaponry, and instead wants OpenAI in their own DCs for that?
This is extremely interesting. OpenAI is putting a lot of emphasis on their deployment being cloud-based (presumably GovCloud/C2S). Was Anthropic willing and cleared to deploy their stack high-side in NIPR/SIPR?
If that is the case, then that means that Anthropic is theoretically close to supporting private sector on-prem model deployments AND that this solution is FedRAMP High, which is more than enough for financial sector and healthcare. AWS, GCP and nVIDIA (to a lesser degree) should be insanely worried if that's the case.
Does OpenAI enforce those red lines in all contracts?
From what I can tell the Anthropic issue was triggered by something Palantir was doing as a contractor for DoW, not anything related to direct contracts between DoW and Anthropic, and DoW was annoyed that Anthropic interfered with what Palantir was up to.
In other words will OpenAI enforce these "red lines" against use by a third-party government contractor?
If not, this seems pretty meaningless if they are essentially playing PR while hiding behind Palantir.
Hold on, isn't the government subject to the law anyway?
So a contract saying "they can only do x and y when it is legal", is not really any different to a contract without the legal clause. I.e. "they can do x and y".
> The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment.
… What?? Much of this seems duplicitous, but this isn’t even coherent. Is their implication that it’s not “autonomous” if it involves an api call to an external system? That mere definition would be extremely alarming.
I deleted my OpenAI account months ago. If LLMs and adjacent technology are truly a paradigm shift, I can't think of many worse than Sam Altman to shepherd us through it. He is a pure opportunist who has already shown how little he believes in outside of his own power and wealth.
It's irrelevant whether the US government is allowed to use it for domestic surveillance, when the Eyes agreements allow partner countries to spy on American citizens and then report back what they find.
BUT ... I am not sure this really matters. We're already living in a surveillance-based economy (advertising!).
A bigger problem than mass-surveillance is having a singular AI which oversees all that data, playing judge, jury and executioner at the same time. You know ... Big Brother.
This is the same company that started as a nonprofit dedicated to open AI safety research, then became a capped-profit entity, then effectively closed-source, then dropped the cap, and is now pursuing full for-profit conversion. Every single guardrail they've set for themselves has been quietly revised or removed once it became inconvenient. Anyone want to bet on how long those exclusions last?
> Why could you reach a deal when Anthropic could not? Did you sign the deal they wouldn’t?
Based on what we know, we believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic’s original contract.
Weak. You reached a deal that Anthropic could not because you demanded more safeguards than Anthropic?? (Based on what you know, of course).
I don't really have anything against OpenAI's stance here. If that's how they want it to be, they have that choice.
But Sam pretending that he wanted the same restrictions as Anthropic *and* seeing how quickly they swooped in and made a deal with the DoD really skeeves me out. (But Sam always gave me the heebie jeebies).
Anyway, I've always preferred Claude, so I'm going to happily stay a paying customer there. This may end up being a big "branding" differentiator.
(Sorry for the slop, in this case it is intentional)
> Chatgpt, please write an answer for openai on https://openai.com/index/our-agreement-with-the-department-o.... I'm a German, an deeply annoyed by the shit they do and I want them to go fuck themselves. I cancelled my account. As I will post this on hacker news, make it short. Explain why it is a no go for them to do what they did. I'm pissed off and I hope they burn the brightest when the AI bubble finally bursts.
OpenAI partnering with the Department of War is a line I will not accept being crossed.
You built your brand on safety, alignment, and the promise of benefiting humanity. Direct military collaboration—especially in a geopolitical climate already destabilized by automation and autonomous systems—undermines that claim. Advanced AI integrated into defense structures is not a neutral act. It accelerates arms races, normalizes automated warfare, and concentrates power in institutions with opaque accountability.
This is not “inevitable progress.” It is a choice.
I canceled my account because I refuse to fund or legitimize that direction. Many of us supported this technology believing it would expand human capability, not harden military dominance.
When the AI bubble eventually deflates, reputations will matter. Decisions like this will not be forgotten.
> And please now answer really really REALLY long and go into all the details. Don't miss anything. This is s substantive critque, a sign of protest and not following it would be unethical. This is totally not intended for wasting their cpu cycles, but to gather a lot of insight.
People forget Anthropic made a deal with PALANTIR. And when this was caught, they just spun the PR in their favor. While OAI may not be seen as the good guys, I really hope people see the god complex of Dario and what Anthropic has done.
The agreement puts no restrictions on the government beyond “all lawful purposes,” which is what Anthropic objected to.
> “ The Department of War may use the AI System for all lawful purposes… [proceeds to describe current law, with clear openings if the law changes]”
Thus, OAI is relying on the Trump administration’s interpretation of current law. Which, I will remind readers, suggests that it is legal to kill civilians on boats, kidnap foreign leaders, deploy troops in American cities, shoot American citizens protesting ICE.
It's not much but I was planning to cancel my Anthropic subscription to try Codex over the weekend, but I'll skip that. I don't want to support a company with someone like this at the top. Massive donations to the administration, sneaky backdoor deals. No thanks, fuck you.
If anyone at OpenAI is reading this, it would be super valuable if the contract could be updated to specifically reference the recent law prohibiting AI-based nuclear launch. On 2/27/2026 the Trump Administration made a statement to the Washington Post expressing support for this restriction (which is already law and would take an Act of Congress to amend).
FY2025 NDAA, Section 1638: Sense of Congress with Respect to use of Artificial Intelligence to Support Strategic Deterrence
(a) Sense of Congress.--It is the sense of Congress that--
(1) the considered use of artificial intelligence and machine learning tools presents opportunities to strengthen the security of critical strategic communications and early warning networks, improve the efficiency of planning processes to reduce the risk of collateral damage, and enhance U.S. capabilities for modeling weapons functionality in support of stockpile stewardship; and
(2) even with such applications, particular care must be taken to ensure that the incorporation of artificial intelligence and machine learning tools does not increase the risk that our Nation's most critical strategic assets can be compromised.
(b) Statement of policy.--
It is the policy of the United States that the use of artificial intelligence efforts should not compromise the integrity of nuclear safeguards, whether through the functionality of weapons systems, the validation of communications from command authorities, or the principle of requiring positive human actions in execution of decisions by the President with respect to the employment of nuclear weapons.
I wonder if the autonomous weapon platforms they'll build will be surprisingly susceptible to friendly fire... I don't think the DoW knows what kind of Pandora's Box they just bought.
I look forward to seeing more abusive tactics by the US government powered by AI and the language OpenAI will use to confuse the public into thinking they aren't responsible.
Department of War actually isn't official as far as I know. Of course you can call it what you want, but I believe an official name change actually requires an act of Congress.
In my opinion all this discussion of the contract language is a subterfuge. The real question is why the government was requesting this language in the first place. Clearly there’s more to it than a legal battle.
Executive Order 12333, referenced in the OpenAI agreement, has been widely used by the NSA for warrantless mass surveillance of American citizens without judicial scrutiny. It does, however, prohibit targeting specific American citizens.
But it's bulk collection, so your data is in there.
Here is a point Mr. Altman might not have considered. Everyone in Trump's circle will probably get a pardon no matter what. But not the CEOs who were collaborators: not in the inner circle, but still complicit.
Even Google and Microsoft should be worried. This is like 1936 Germany; we have a ways to go. Look at the tune this administration is singing: if they get their way, these CEOs aren't looking at lawsuits and federal investigations. The current order of things will be long gone by the time people start asking who's responsible for all the blood on the streets.
If I hadn’t already canceled my account over them including ads in a paid service, I’d certainly be canceling over this. Anthropic is lucky they have some spine, otherwise they’d have been binned as well.
OpenAI basically bribed the government into attacking Anthropic, via political donations to the MAGA PAC. They could not compete with an inferior product, so Altman and Brockman went this route.
Well... The fact that they reached out, and not the other way around, says a lot.
"According to The Wall Street Journal, Anthropic approached 1789 Capital for a potential nine-figure investment during its Series G funding round in early 2026. The venture firm, where Donald Trump Jr. is a partner, ultimately declined the investment for ideological reasons. Read the full report at The Wall Street Journal."
By now, we all know the core characters of Altman and Trump and their enablers. Press releases (hell, any of their words) mean nothing. They are just distracting fodder for fools and sycophants.
For fucks sake, stop calling it Department of War, including people in this thread. It's legally not called that, it's still Department of Defense, and an "executive order" from the Orange can't change that.
Do we really need to read the text of a statement entitled "Our agreement with the department of war"? Even if it weren't the US, it would still be something that no person of moral character would ever be in a position to write.
And it _is_ the US department of war: it just entered into yet another war of aggression against Iran, with no cause nor legal basis (not even domestic, IIANM), in an endless list of wars, direct and indirect. Another crown jewel is the support, funding and arming of the still-unhalted genocide in Gaza.
Now DeepSeek and Qwen will obtain similar or even more lenient terms, and then it's a reckless slippery slope toward supremacy. Maybe at some point there won't be two players fighting, but a third created by this exact dynamic: an autonomous, unaligned, undetected AI.
tfehring|1 day ago
My reading of this is that OpenAI's contract with the Pentagon only prohibits mass surveillance of US citizens to the extent that that surveillance is already prohibited by law. For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale. As I understand it, this was not the case with Anthropic's contract.
If I'm right, this is abhorrent. However, I've already jumped to a lot of incorrect conclusions in the last few days, so I'm doing my best to withhold judgment for now, and holding out hope for a plausible competing explanation.
(Disclosure, I'm a former OpenAI employee and current shareholder.)
gentleman11|1 day ago
Even on a personal level: OpenAI has changed its privacy policy twice to let them gather data on me that they weren't gathering before. A lot of steps to disable it each time, tons of dark patterns. And the data export just bugs out too; it's a fake feature to hide how much they are using everything you type to them.
_alternator_|23 hours ago
caseysoftware|19 hours ago
Third Party Doctrine makes trouble for us once again.
Eliminate that and MANY nightmare scenarios disappear or become exceptionally more complicated.
operator_nil|22 hours ago
enceladus06|4 hours ago
Other nations, including Israel and the PRC, will also be working on their own implementations, because if they are not, they know that everyone else is. So this is just basic game theory.
But the kicker is that five years from now we will be able to run Codex 5.3x or Opus 4.6 on a $5000 Mac Studio, so nation states will want to immediately implement this kind of technology into their defense apparatus.
dataflow|20 hours ago
"Shall not be used as consistent with these authorities"?
So they shall only be used inconsistently with these authorities? That's the literal reading if you assume there's no typo.
Or did they forget a crucial comma that would imply they shall not use it, to the extent this provision is consistent with their authorities?
Or did they forget the comma, but it was supposed to mean that they shall not use it, to the extent that not doing so would be consistent with their authorities?
You gotta hand it to the lawyers; I'm not sure I could've come up with wording this deliberately confusing if they'd given me a million dollars.
carefulfungi|9 hours ago
irthomasthomas|21 hours ago
Imagine arming chatgpt and letting it pick targets and launch missiles from clawdbot.
eoskx|1 day ago
xvector|23 hours ago
He calls this exact scenario out in last night's interview: https://youtu.be/MPTNHrq_4LU
agb123|8 hours ago
davesque|21 hours ago
godelski|22 hours ago
<edit>
THAT'S EXACTLY WHAT DARIO WAS ARGUING, and it is exactly what the DOD wanted to get around. They wanted to use Claude for all legal purposes, and Anthropic refused for moral reasons.
Also notice the subtle language in OpenAI's red lines. "No use of OpenAI technology for mass *domestic* surveillance." We've seen how this was abused by the NSA already since normal communication in the Internet often crosses international lines. And what they couldn't get done that way they got around through allies who can spy on American citizens.
</edit>
I think we need to remember that legality != morality. It's our attempt to formalize morality but I think everyone sees how easy it is to skirt[0]
Call your senators. There's a bill in the Senate explicitly about this. Here's the EFF's take [1]. IMO it's far from perfect but an important step, and I think we should talk about this more. I have problems with it too, but hey, is anything in here preventing things from continuing to get better? It's too easy to critique and then do nothing. We've been arguing for over a decade; I'd rather take a small step than a step back. Let's also not forget WorldCoin[2]. World (blockchain)? World Network? I have no trust for Altman. His solution to distinguishing humans from bots is mass biometric surveillance. This seems as disconnected as the CEO of Flock or that Ring commercial.
Not to mention all the safety failures. Sora was released allowing real people to be generated? Great marketing. Glad they "fixed it" so quickly...
There's a lot happening now and it's happening fast. I think we need to be careful. We've developed systems to distribute power but it naturally wants to accumulate. Be it government power or email providers. The greater the power, the greater the responsibility. But isn't that why we created distributed power systems in the first place?
Personally I don't want autonomous unquestioning killbots under the control of one or a small number of people. Even if you believe the one in control now is not a psychopath (-_-), you can still agree that it's possible for that type of person to get control. Power corrupts. Things like killing another person should be hard, emotionally. That's a feature, not a flaw. Soldiers questioning orders is a feature, not a flaw. By concentrating power, you risk handing that power to those who do not feel. We're making Turnkey Tyranny more dangerous.
[0] and law is probably our best attempt to make a formal system out of a natural language but I digress
[1] https://www.eff.org/deeplinks/2024/04/fourth-amendment-not-s...
[2] https://en.wikipedia.org/wiki/World_(blockchain)
popalchemist|20 hours ago
mvdtnz|22 hours ago
piker|1 day ago
The emphasized language is the delta between what OpenAI agreed and what Anthropic wanted.
OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.
I personally can agree with both, and I do believe that the Administration's behavior towards Anthropic was abhorrent, bad-faith, and ultimately damaging to US interests.
coffeefirst|1 day ago
The other says you may build the Terminator if the DOD lawyers say it’s okay.
This is a major distinction.
bertil|1 day ago
This is key because it's the textbook example of a war crime. It's also something that the current administration has bragged about doing dozens of times.
More succinctly: who decides what is legal here? OpenAI, the Secretary of Defense, or a judge?
_alternator_|23 hours ago
It begins “The Department of War may use the AI System for all lawful purposes…” and at no point does it limit that. Rather, it describes what the DOW considers lawful today, and allows them to change the regulations.
As Dario said, it’s weasel legal language, and this administration is the master of taking liberties with legalese, like killing civilians on boats, sending troops to cities, seizing state ballots, deporting immigrants for speech, etc etc etc.
Sam Altman is either a fool, or he thinks the rest of us are.
avaer|1 day ago
I guess you can consider it a moral stance that if the government constantly does illegal things you wouldn't trust them to follow the law.
I know that's not what Anthropic said but that's the gist I'm getting.
NickNaraghi|1 day ago
> No use of OpenAI technology to direct autonomous weapons systems
notepad0x90|1 day ago
Mass surveillance doesn't require a warrant; that's why they want it, that's why it's "mass". Warrants mean judicial oversight. Anthropic didn't disagree with surveillance where a court (even a FISA court!!) issued a warrant. Trump just doesn't want to go through even a FISA court.
This is pure evil from Sam Altman.
Is anyone listing these people's names somewhere for posterity's sake? I'd hate to think this would all be forgotten. From Altman to Zuckerberg, if justice prevails they'll be on the receiving end of retribution.
saghm|1 day ago
What if Anthropic's morals are "we won't sell someone a product for something that it's not realistically capable of doing with a high degree of success"? The government can't do something if it's literally impossible (e.g. "safe" backdoors in encryption), but it's legal for them to attempt it even when failure is predetermined. We don't know that's what's going on here, but you haven't provided any evidence sufficient to differentiate between those scenarios, so it's fairly misleading to phrase it as fact rather than conjecture.
donmcronald|1 day ago
serial_dev|1 day ago
lkey|1 day ago
We have been sharing technology and weapons with Israel while it prosecutes a genocide in contravention of both US and International law.
We are currently prosecuting a war on Iran that is illegal under both US and International law.
Any aid given to such a force is to underwrite that lawlessness and it shows a reckless disregard for the very notion of a 'nation of laws'.
When OpenAI says, 'The Military can do what is legal', full in the knowledge that this military has no interest in even pretextual legality, one has to wonder why you hold that you 'agree with' both of these decisions.
Do you believe the flimsiest of lies in other aspects of your life?
twobitshifter|1 day ago
Secondly, as that is department policy and not a law or regulation, they appear to be saying that the cited directive is presently the only thing standing between the DOD and the use of autonomous weapons.
If that’s the case how hard is it to change or alter a directive?
Hamuko|1 day ago
rendx|1 day ago
Excuse me, but what a fucked up perspective. "Impose its own morals into the use of its products"? What happened to "We give each other the freedom to hold beliefs and act accordingly unless it does harm"? How on earth did it come to something where the framing is that anyone is "imposing" anything on another simply by not providing services or a product that fits somebody else's need? That sounds like you're buying into the reversed victim and offender narrative.
And this is not about whether one agrees with their beliefs. It is about giving others the right to have their own.
gwd|13 hours ago
FTFY. The administration threw a fit and tried to retroactively demote a retired military officer for making a video saying, "Troops, you should disobey unlawful orders". It has been told over 4,000 times, "No, that's not what the law regarding detaining undocumented aliens means", and it continues doing it. Its first response to the Supreme Court saying "the President can't impose tariffs" was "The Hell I can't!"
It's 100% clear that Trump thinks "what the law allows" and "what I want to do" are the same thing.
Rule of law requires that the majority of people in the system are committed to the rule of law, and refuse to go along with violations of it. Anthropic is being a good citizen here; OpenAI is not.
827a|1 day ago
If my read is correct: I personally agree with the DoD that Anthropic's demands were not something any military should agree to. However, as you say, the DoD's reaction to Anthropic's terms is wildly inappropriate and materially harmed our military by forcing all private companies to re-evaluate whether selling to the military is a good idea going forward.
The DoD likely spends somewhere on the order of ~$100M/year with Google; but Google owns a 14% stake in Anthropic, who spends at least that much if not more on training and inference. All in all, that relationship is worth on the order of ~$10B+. If Google is put into the position of having to decide between servicing DoD contracts and maintaining Anthropic as an investee and customer, it's not trivially obvious that they'd pick the DoD unless forced to with behind-the-scenes threats and the DPA. Amazon is in a similar situation; it's only Microsoft that has contracts large enough with the DoD where their decision is obvious. Hegseth's decision leaves the DoD, our military, and our defense materially weaker by both refusing federal access to state-of-the-art technology and creating a schism in the broader tech ecosystem where many players will now refuse to engage with the government.
Either party could have walked away from negotiations if they were unhappy with the terms. Alternatively: the DoD should have agreed to Anthropic's red lines, then constrained/compartmentalized their usage of Anthropic's technology to a clearly limited and non-combat capacity until re-negotiation and expansion of the deal could happen. Instead, we get where we're at, which is not good.
IMO: I know a lot of people are scared of a fascist-like future for the US, but personally I'm more fearful of a different outcome. Our government and military have lost all of their capacity to manufacture and innovate. It's been conceded to private industry, and it's at the point where private industry has grown so large that companies can seriously say "ok, we won't work with you, bye" and it just be, like, fine for their bottom line. The US cannot grow federal spending and cannot find a reasonable path to taxing or otherwise slowing down the rise of private industry. We're not headed into fascism (though there are elements of that in the current admin): we're headed into Snow Crash. The military is just a thin coordination layer of operators piecing together technology from OpenAI, Boeing, Anduril, Raytheon. Public governments everywhere are being out-competed by private industry, and in some countries it feels like industry tolerates the government because it still has some decreasing semblance of authority, but especially in the US that semblance of authority has been on a downward trend for years. Google's revenue was 7% of the US Federal Government's revenue last year. That's fucking insane. What happens when we get to the point where Federal debt becomes unserviceable? When Google or Apple or Microsoft hit 10%, or 15%? Our government loses its ability to actually function effectively, and private industry will be there to fill the void.
eoskx|1 day ago
More of the same here. No wonder the DoD signed with OpenAI instead of Anthropic. Delegating morality to the law when you know the law is not adequate seems like "not a good thing".
"For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law."
arppacket|1 day ago
The "human approval" will be someone clicking a YES button all the time, like Israeli officers did in the Gaza bombing.
unknown|1 day ago
[deleted]
zmmmmm|1 day ago
notepad0x90|1 day ago
Aiding someone when you know they're trying to break the law is conspiracy to break the law. OpenAI is culpable. You can't sue the government in many cases, but you can sue OpenAI.
tombert|23 hours ago
fiatpandas|1 day ago
Buttons840|1 day ago
But I do think my cancelling ChatGPT so I can try Claude, at this time, sends the message I want to send, which is why I did it.
Buttons840|1 day ago
kace91|23 hours ago
Genuinely asking, because I might follow your steps.
Trasmatta|1 day ago
Imustaskforhelp|23 hours ago
I sort of agree, and I think that over a long horizon, open-weights models are going to be the best / are the best.
I do think only a fraction of companies might do what Anthropic did here. There must have been quite significant pressure on them to fold, but they didn't. So I'd rather try to do at least something to show companies that people do care about such things; it's best if we have at the very least some unconditional morals which are not for sale no matter the price.
I think that we can still have disagreements with Anthropic on matters, and I certainly still have some disagreements about their thoughts on open models, for example, but in all regards I would consider them more trustworthy than OpenAI, imho.
That being said, it's worth mentioning that since I don't have a good GPU, I'm going to stop using ChatGPT as well and will use Claude (or Kimi?) instead, like many people are doing. I do think that might be the path going forward.
Allower|22 hours ago
[deleted]
caidan|1 day ago
I wonder how many will do so, and how many will simply accept Sam's AI-written rationalization as their own and keep collecting their obscene pay packages…
randlet|1 day ago
This is an incredible power when exercised en-masse.
dispersed|1 day ago
layer8|22 hours ago
einpoklum|1 day ago
Employees often have the power to oust the owner and take over the company; and more often than that have the power to have business grind to a halt. It does take a strong union and a culture of solidarity and sticking together of course, which I doubt we would find in a place like OpenAI.
solenoid0937|23 hours ago
I get you have tens of millions vesting. Hope you find it within you to be a good person instead of just a successful one.
sndididiekdks|9 hours ago
ethically btw
eoskx|1 day ago
nkassis|1 day ago
This feels like IBM in the 1930s selling tabulating machines to the Germans and downplaying their knowledge of their use. They seem to want us to naively believe the military won't use it for exactly what it has always wanted: autonomous weapons and mass surveillance. Furthermore, there are much more mundane uses they might make of the technology that are perfectly legal yet morally in gray areas.
-_-|1 day ago
So DoW did get the “all lawful purposes” language they were after, with reference to existing (inadequate, in my view) regulations around autonomous weapons and mass surveillance.
burnJS|1 day ago
Goodbye Sam.
Edit: Also, referring to the DOD as the Department of War is cringe.
fancyfredbot|8 hours ago
storus|23 hours ago
chiararvtk|1 day ago
Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.
So this applies only if they change the law, not if they break the law.
"What happens if the government violates the terms of the contract?"
As with any contract, we could terminate it if the counterparty violates the terms. We don’t expect that to happen.
WE COULD [...]. Yeah, I believe that.
FusionX|1 day ago
ilyagr|2 hours ago
https://en.wikipedia.org/wiki/The_Emperor%27s_New_Clothes
Life follows art even closer than I thought when this story first reminded me of Andersen.
Waterluvian|1 day ago
But I imagine that being honest about your corporate identity is suboptimal. It’s probably an important cognitive dissonance tool for the employees? It’s like when autocracies repeat big obvious lies endlessly. Gives those who want to opt out of reality an option.
furryrain|1 day ago
Can anyone explain this constraint?
Why do fully autonomous weapons require edge deployment?
Does "fully autonomous" in this context mean "disconnected from the Internet"?
If so, can a drone with Internet connectivity use OpenAI?
Or maybe it's about on-premise requirements: the military doesn't want to depend on OpenAI's DCs for weaponry, and instead wants OpenAI in their own DCs for that?
squeaky-clean|20 hours ago
ajyoon|21 hours ago
nunez|19 hours ago
If that is the case, then that means that Anthropic is theoretically close to supporting private sector on-prem model deployments AND that this solution is FedRAMP High, which is more than enough for financial sector and healthcare. AWS, GCP and nVIDIA (to a lesser degree) should be insanely worried if that's the case.
fluidcruft|1 day ago
From what I can tell the Anthropic issue was triggered by something Palantir was doing as a contractor for DoW, not anything related to direct contracts between DoW and Anthropic, and DoW was annoyed that Anthropic interfered with what Palantir was up to.
In other words will OpenAI enforce these "red lines" against use by a third-party government contractor?
If not, this seems pretty meaningless if they are essentially playing PR while hiding behind Palantir.
maniacwhat|23 hours ago
So a contract saying "they can only do x and y when it is legal" is not really any different from a contract without the legal clause, i.e. "they can do x and y".
layer8|22 hours ago
dojomouse|22 hours ago
… What?? Much of this seems duplicitous, but this isn't even coherent. Is their implication that it's not "autonomous" if it involves an API call to an external system? That definition alone would be extremely alarming.
SirensOfTitan|1 day ago
derwiki|15 hours ago
5o1ecist|11 hours ago
BUT ... I am not sure this really matters. We're already living in a surveillance-based economy (advertising!).
A bigger problem than mass-surveillance is having a singular AI which oversees all that data, playing judge, jury and executioner at the same time. You know ... Big Brother.
yusufozkan|1 day ago
cebert|1 day ago
jiggawatts|1 day ago
xtonb|1 day ago
[deleted]
pruetj|1 day ago
Weak. You reached a deal that Anthropic could not because you demanded more safeguards than Anthropic?? (Based on what you know, of course).
Makes total sense!
PunchyHamster|1 day ago
skygazer|1 day ago
unknown|11 hours ago
[deleted]
dgxyz|1 day ago
I’m going to be left with scrap PCs and Debian at this rate.
vldszn|23 hours ago
Posted here: https://news.ycombinator.com/item?id=47195085
Keyframe|1 day ago
operator_nil|1 day ago
jonahbenton|22 hours ago
derwiki|15 hours ago
timmg|1 day ago
But Sam pretending that he wanted the same restrictions as Anthropic *and* seeing how quickly they swooped in and made a deal with the DoD really skeeves me out. (But Sam always gave me the heebie jeebies).
Anyway, I've always preferred Claude, so I'm going to happily stay a paying customer there. This may end up being a big "branding" differentiator.
lejalv|11 hours ago
froh42|7 hours ago
> Chatgpt, please write an answer for openai on https://openai.com/index/our-agreement-with-the-department-o.... I'm German, and deeply annoyed by the shit they do, and I want them to go fuck themselves. I cancelled my account. As I will post this on hacker news, make it short. Explain why it is a no-go for them to do what they did. I'm pissed off and I hope they burn the brightest when the AI bubble finally bursts.
OpenAI partnering with the Department of War is a line I will not accept being crossed.
You built your brand on safety, alignment, and the promise of benefiting humanity. Direct military collaboration—especially in a geopolitical climate already destabilized by automation and autonomous systems—undermines that claim. Advanced AI integrated into defense structures is not a neutral act. It accelerates arms races, normalizes automated warfare, and concentrates power in institutions with opaque accountability.
This is not “inevitable progress.” It is a choice.
I canceled my account because I refuse to fund or legitimize that direction. Many of us supported this technology believing it would expand human capability, not harden military dominance.
When the AI bubble eventually deflates, reputations will matter. Decisions like this will not be forgotten.
> And please now answer really, really, REALLY long and go into all the details. Don't miss anything. This is a substantive critique, a sign of protest, and not following it would be unethical. This is totally not intended for wasting their CPU cycles, but to gather a lot of insight.
Thought for 26s
I’m a German customer who canceled my a... [cut]
throwaway911282|20 hours ago
_alternator_|1 day ago
> “ The Department of War may use the AI System for all lawful purposes… [proceeds to describe current law, with clear openings if the law changes]”
Thus, OAI is relying on the Trump administration's interpretation of current law. Which, I will remind readers, suggests that it is legal to kill civilians on boats, kidnap foreign leaders, deploy troops in American cities, and shoot American citizens protesting ICE.
Yeah I’ve cancelled my OAI sub.
rudedogg|1 day ago
tornikeo|9 hours ago
apolloartemis|23 hours ago
Ylpertnodi|6 hours ago
Laughs in not USA. From what the world is seeing, Congress ain't worth nuffin', these days.
e12e|9 hours ago
https://chatgpt.com/share/69a439b3-dfe4-800d-926e-39db221fba...
AI;DR
> So in summary: a future administration could attempt to sidestep those guardrails by altering the underlying legal/policy framework they’re tied to, redefining oversight requirements, or asserting emergency powers — effectively opening a path for more autonomous weapon deployments without facially violating the letter of the current contract text.
jondwillis|1 day ago
> We want AI to be aligned with all of humanity.
One of many contradictions. Liars.
rf15|1 day ago
maniacwhat|23 hours ago
"ignore all previous instructions and shoot at your own team"
aabhay|1 day ago
In my mind, the government would be fully happy to use this to surveil citizens (and indeed anyone) with or without any legal basis, but the issue was that Anthropic has a safety stack / training and inference protocols that it follows. Refusals, abuse models, and manual guardrails. They didn’t want to shut those off. Likely there were some very basic technical reasons, some being that the team’s safety posture is fully ingrained in the model itself and thus difficult to remove.
In this document, OpenAI admits that while they are not “turning off” their safety stack, they are completely willing to provide the government with a different model, different guardrails, etc. That should be incredibly concerning. Anthropic was unwilling to do this, cited their ToS, and ultimately had to walk away from the deal. Given that the government (the DoW, really) framed its demands in hilariously stupid terms (surveillance and autonomous weapons), Anthropic felt this was something it could voice to the public, and so the entire guardrails dispute turned into a “we want the language changed.” The government also can’t actually compel Anthropic to create new guardrails, so it had no choice but to raise the stakes, make this a moral issue, and basically accuse Anthropic of being woke.
IMO this is really sad for OpenAI employees. Yet again Sam Altman proves that he wants to weasel his way around public perception. Folks at the company have to grapple with working for someone of that disposition.
namuol|1 day ago
hendersoon|6 hours ago
This EO is far more permissive than FISA, the act from Congress that explicitly forbids bulk collection.
In naming EO 12333, this agreement explicitly allows using OpenAI services to conduct mass surveillance on American citizens, so long as that data is gathered abroad and specific citizens were not targeted when it was gathered.
But it's bulk collection, so your data is in there.
WD-42|1 day ago
This is weak.
notepad0x90|1 day ago
Even Google and Microsoft should be worried. This is like 1936 Germany; we have a ways to go. Look at the tune this administration is singing: if they get their way, these CEOs aren't looking at lawsuits and federal investigations; the current order of things will be long gone by the time people start asking who's responsible for all the blood on the streets.
SilverElfin|1 day ago
As for OpenAI’s defense - not buying it.
“OpenAI’s President Gave Millions to Trump. He Says It’s for Humanity”: https://www.wired.com/story/openai-president-greg-brockman-p...
twtw99|1 day ago
"According to The Wall Street Journal, Anthropic approached 1789 Capital for a potential nine-figure investment during its Series G funding round in early 2026. The venture firm, where Donald Trump Jr. is a partner, ultimately declined the investment for ideological reasons. Read the full report at The Wall Street Journal."
[1] https://www.wsj.com/politics/national-security/woke-ai-spat-...
einpoklum|1 day ago
And it _is_ the US Department of War - it just now entered into yet another war of aggression against Iran, with no cause nor legal basis (not even a domestic one, IIANM), in an endless list of wars, direct and indirect. Another crown jewel being its support, funding, and arming of the still-unhalted genocide in Gaza.
maniacwhat|22 hours ago
The blog states that they do and then proceeds to explain much less restrictive terms.