top | item 47201779

cube00 | 1 day ago

From that same X thread: Our agreement with the Department of War upholds our redlines [1]

OpenAI has the same redlines as Anthropic, based on Altman's statements [2]. Yet somehow Anthropic gets banished for upholding its redlines while OpenAI ends up with the cash?

[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m

[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...


AlexVranas|23 hours ago

OpenAI is playing games.

When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."

When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."

That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.

trjordan|5 hours ago

"Red lines" does not mean some philosophical line they will not cross.

"Redlines" are edits to a contract, sent by lawyers to the other party they're negotiating with. They show up in Word's Track Changes mode as red strikethrough for deleted content.

They are negotiating the specifics of a contract, and Anthropic's contract was overly limiting to the DoD, whereas OpenAI's was not.

germandiago|11 hours ago

I am going to stop using ChatGPT immediately.

bambax|13 hours ago

> but we will shake our fist at them while they do it

Not even that. They are not shaking anything except their booty.

docmars|9 hours ago

Personally I think OpenAI is intending to infiltrate their political enemy's stronghold and look for ways to leak data to "get Trump" as per usual.

They'll say "oops" and then we'll spend the next few years listening to pointless Congressional hearings.

gchamonlive|16 hours ago

Why DoD and not DoW?

ghm2199|9 hours ago

Isn't it simpler to say that Anthropic adopted a values-based use approach and OpenAI adopted a legal one?

Or in other words, there are two ways you can decide to handle a lucrative property:

1. Designate it private and draft usage terms per your value system (as long as your values don't violate any laws).

2. In the face of competition, give up some values and agree to a legal definition of use that favors you.

jrochkind1|18 hours ago

Anthropic wanted to put those restrictions in the contract. OpenAI said they'll just trust their own "guardrails" in the training, they don't need it in the contract. (I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)

Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.

lostnground|15 hours ago

Guardrails can't really oversee this. If you can decompose a problem into individual steps that are not, in themselves, against the agent's alignment, the aggregate can certainly still do the prohibited thing.

Symmetry|11 hours ago

How confident are we, with OpenAI's recent very large contribution to Trump's PAC, that OpenAI wasn't working to get Anthropic designated a supply chain risk behind the scenes? I don't want to be too paranoid here but given Sam's reputation and cui bono I don't think we can really rule this out either.

Barbing|15 hours ago

>(I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)

Right, wouldn't they need a moderation layer that could, for example, fire if it analyzed & labeled too many banal English conversations?

They really gave credit to training-time guardrails? I mean, it could perhaps reject prompts about designing social credit systems sometimes, but I can't imagine realistic mitigations to mass domestic surveillance generally.

Wowfunhappy|23 hours ago

> However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?

The current administration is so incompetent that I find this perfectly believable.

I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.

I don't know if that's actually what happened here, I just find it plausible.

el_benhameen|21 hours ago

Absolutely incompetent, but I don’t think that’s the cause here. I think Anthropic’s sin was publicly challenging the administration. They’re huge on optics. You can get away with anything as long as you praise and bow in public.

randall|23 hours ago

same. this is about losing a negotiation and saving face / exacting revenge.

jellyroll42|22 hours ago

Sam Altman has no scruples. Dark Triad personality. No reason to believe anything he says.

jacquesm|22 hours ago

The same goes for anybody still working at OpenAI past Monday morning 9 am.

_heimdall|21 hours ago

Anthropic demanded defining the redlines. OpenAI and others are hiding behind the veil of what is "lawful use" today. They aren't defining their own redlines and are ignoring the executive branch's authority to change what is "lawful" tomorrow.

Nevermark|16 hours ago

Or the increasing impunity all three branches of government are giving themselves with regard to bad faith interpretations of the law, and a lack of government accountability when they color outside the lines.

Much of the impunity is now Supreme Court settled law.

We see clearly unconstitutional behavior every day, and there is no systematic, timely or effective, push back from any constitutionally enabled oversight.

Checks and balances don't work, when players are more loyal to party than branch or constitution.

Unfortunately, there are no constitutional checks, balances or limits on single party control. And single party control negates all the others. That one party can majority control all three branches is a serious failure mode in political incentives (bipartisanship is highly disincentivized) and governance (even temporary or shaky full control incentivizes making full control permanent over all other "policies").

Until the last few decades, diverse concerns across states avoided tight centralization within parties, and therefore across branches.

Symmetry|11 hours ago

Anthropic's whole worry with mass surveillance was that current law is too loose in the age of AI to offer enough restraint.

ChildOfChaos|11 hours ago

Brockman donating $25 million in January might have a little something to do with it.

827a|23 hours ago

My understanding of the difference, influenced mostly by consuming too many anonymous tweets on the matter over the past day so could be entirely incorrect, is: Anthropic wanted control of a kill switch actively in the loop to stop usage that went against the terms of use (maybe this is a system prompt-level thing that stops it, maybe monitoring systems, humans with this authority, etc). OpenAI's position was more like "if you break the contract, the contract is over" without going so far as to say they'd immediately stop service (maybe there's an offboarding period, transition of service, etc).

Nevermark|23 hours ago

> more stringent safeguards than previous agreements, including Anthropic's.

Except they are not "more stringent".

Sam Altman is being brazen to say that.

In their own agreement as Altman relays:

> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control

> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing

> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives

> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not putting their neck out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.

Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.

In other words, no OpenAI restriction at all.

That is not at all comparable to a requirement that the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. That makes Anthropic's position much "more stringent", and a rare and significant pushback against governmental AI abuse.

(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)

clhodapp|22 hours ago

Yep. It's the difference between "Don't do these things, regardless of what the law says." and "Do whatever you want, but please follow your own laws while you do it".

As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"

qmarchi|22 hours ago

Easy way to summarize it: "You're not allowed to do these things, except for all of the laws that allow you to do these things."

pear01|21 hours ago

Brings to mind the infamous line from Nixon:

"When the president does it, that means it is not illegal".

This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent many of the people pushing the excesses going on today in government share a shameless adherence to this creed.

fnordpiglet|20 hours ago

Each of those clauses has a DoD-policy carve-out as an exception, which basically says they can do whatever they want if they want to do it, but won't be able to if they don't want to do it.

aardvarkr|21 hours ago

This is the same government caught spying on its citizens by Snowden so I don’t trust them at all.

stingraycharles|22 hours ago

This implies that OpenAI must build, release, and maintain a model without any safeguards, which is probably the big win, and maybe something Anthropic never wants to do.

lobochrome|18 hours ago

So you want OpenAI to create “laws”?

I for one do not want ai labs to designate what is legally ok to do.

I much prefer the demos to take care of that.

jmward01|17 hours ago

I have never used AI to generate an answer for HN but just this once I thought it would be good to hit ChatGPT specifically and ask it for 'a list of times Sam Altman has gone against his word.' Here was its response:

Shift from Nonprofit Mission to For-Profit Orientation – OpenAI was founded as a nonprofit with a charter focused on “benefit to humanity,” but under Altman it created a capped-profit subsidiary, accepted large investments (e.g., from Microsoft), and critics (including Elon Musk in a 2024 lawsuit) argue this departed from that original mission. A federal judge allowed Musk’s claim that Altman and OpenAI broke promises about nonprofit governance to proceed to trial.

Nonprofit Control Reorganization Drama (2023) – In November 2023, the original nonprofit board cited a lack of transparency and confidence in Altman’s candor as a reason for firing him. He was reinstated days later after investor and employee pressure, highlighting internal conflict over governance and communication.

Dust-Up Over Military Usage Policies – OpenAI initially had explicit public policies restricting AI use in “military and warfare” contexts, but those clauses were reportedly removed quietly in 2024, allowing the company to pursue Department of Defense contracts — a turnaround from earlier language that appeared to preclude such use.

Statements on Pentagon Deal vs. Prior Positioning – In early 2026, Altman publicly said OpenAI shared safety “red lines” (e.g., prohibiting mass surveillance and autonomous weapons) similar to some competitors, but hours later OpenAI signed a deal to deploy its models on classified military networks, leading critics to argue this contradicts earlier positioning on limits for military use.

Regulation Stance Shifts in Congressional Testimony – Altman has advocated for strong regulation of AI in some public settings but in later congressional hearings opposed specific regulatory requirements (like mandatory pre-deployment vetting), aligning more with industry concerns about overregulation — a shift in tone compared with earlier support of regulatory frameworks.

spiderice|21 hours ago

That seems exactly what it should be. The United States military should be able to do what the law allows. If we don't think they should be allowed to do something, we should pass laws. Not rely on the goodness of Sam Altman.

matchagaucho|3 hours ago

The OpenAI PR implies that Anthropic had a "usage-policy" clause with no actual enforcement.

Whereas OpenAI won their contract on the ability to operationally enforce the red lines with their cloud-only deployment model.

kelnos|19 hours ago

The red lines are not the same.

Anthropic refuses to allow their models to be used for any mass surveillance or fully-automated weapons systems.

OpenAI only requires that the DoD follows existing law/regulation when it comes to those uses.

Unfortunately, existing law is more permissive than Anthropic would have been.

bastawhiz|22 hours ago

Altman donated a million to the Trump inauguration fund. Brockman is the largest private maga donor. You don't have to be a rocket scientist to understand what's going on here.

bmitc|17 hours ago

Agreed. These guys are traitors.

JumpCrisscross|3 hours ago

> based on Altman's statements

The dude is notorious for being a compulsive liar; even his supporters have to admit as much.

skrebbel|12 hours ago

It's called corruption.

gzread|13 hours ago

OpenAI donated $25,000,000 to Trump, that's why. Now people are cancelling ChatGPT subscriptions, so he needs to walk back the optics.

rootusrootus|1 day ago

Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.

fc417fc802|20 hours ago

The demand was that Anthropic permit any use that complied with the law. They refused. OpenAI claims to have the same red lines but in reality has agreed to permit anything that complies with the law.

In other words OpenAI is intentionally attempting to mislead the public. (At least AFAICT.)

moogly|23 hours ago

Turns out both companies ran the agreement through their legal departments (Claude and GPT), and one of them did a poor summary. I (think I) jest, but this is probably going to be a thing as more and more companies use LLMs for legal work.

snickerbockers|23 hours ago

One nuance I've noticed: Anthropic's statement specifically said that the use of their products for these purposes was not included in the contract with the DoD, but it stops short of saying it was prohibited by the contract.

Maybe it's just a weak choice of words in Anthropic's statement, but the way I read it, I get the impression that Anthropic is assuming it retains discretion over how its products are used for any purposes not outlined in the contract, while the DoD sees it more along the lines of a traditional sale, in which the seller relinquishes all rights to the product by default and must enumerate in the contract any rights it will retain.

generic92034|23 hours ago

Punish one, teach a hundred (companies).

micromacrofoot|23 hours ago

president of openai donated $25 mil to trump last month, openai uses oracle services (larry ellison), kushners have lots invested in openai, altman is pals with peter thiel

yoyohello13|23 hours ago

The reasoning is one company is ‘left and woke’ the other gives money to Trump.

FrustratedMonky|3 hours ago

They can say it on X. But will they refuse to do work?

emsign|13 hours ago

They are obviously lying. OpenAI is not to be trusted anymore.

Analemma_|23 hours ago

It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.

softwaredoug|23 hours ago

The difference is Anthropic wants contractual limitations on usage, explicitly spelling out cases of Mass Surveillance.

OpenAI has more of an understanding that the technology will follow the law.

There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open for judicial interpretation.

The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.

scarmig|22 hours ago

Between Anthropic, the military, and Congress, I have the least faith in Congress to make knowledgeable policy around tech.

slibhb|22 hours ago

It's almost like the Trump administration wanted to switch providers and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump/Trump super pacs

yeahforsureman|8 hours ago

Can't recall the source right now (it would've been on one of the several podcasts I listened to on Friday, I think), but there's a story/rumor to the effect that at some point during Claude's earlier deployment at the Pentagon (it might well have been in the context of the Venezuela/Maduro operation), someone at Anthropic had in one way or another flagged some kind of legality concern regarding the relevant operation (and/or perhaps Anthropic's role in it) with Palantir, who was maintaining the Claude deployments for the DoD. The story goes that after Palantir then relayed this to the DoD, Hegseth had a major fit over how Anthropic's hippie-ass Northern California woke bros should have no say in matters relating to national security, that of Hegseth's "warfighters", etc.

Also, in the latest Hard Fork episode, Casey or Kevin mentions how the DoD undersecretary in charge of this contract doesn't apparently get along with or even pretty much hates Amodei for some reason. I think this might be the same undersecretary dude who actively commented the whole contract term controversy on X yesterday. Too bad I can't recall his name either.

gigatexal|5 hours ago

Exactly. This is very shady. Too many OpenAI investors are in Trump's orbit. OpenAI will say it's their policy, but whereas Anthropic wanted oversight to ensure their redlines were enforced, I think OpenAI will just turn a blind eye. It's doublespeak. It's disingenuous. It's the kind of business play Trump likes, because it's nefarious and screws someone over, like Trump's very delayed (if paid at all) contractors and staff.