top | item 47191302

baconner | 1 day ago

Respectfully, it's very hard to see how anyone could look at what just happened and accept that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms, or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.

Rebuff5007|1 day ago

> it's very hard to see how anyone could look at what just happened

I think what you are missing is their annual comp with two commas in it.

brandall10|1 day ago

When the genius of Upton Sinclair and Russ Hanneman come together so eloquently.

the_real_cher|1 day ago

This. For that check they'll be building the autonomous robots themselves, saying "they're food delivery robots, that's not a gun, that's a drink dispenser!"

tmpz22|1 day ago

Let's be real, one comma is enough for most Americans to flee their own humanity.

lazide|1 day ago

Hey, with expected stock payout - tres commas!

Shit, I wonder if I still have any of those ‘tres commas club’ t-shirts lying around?

skepticATX|1 day ago

One explanation is that this is effectively a quid pro quo, given Brockman’s enormous financial support of the current president.

ZeroGravitas|1 day ago

Yep, theoretically it could just be oligarchic corruption and not institutional insanity at the highest levels of the government. What a reassuring relief it would be to believe that.

monooso|1 day ago

I agree with your assessment, but given the past behaviour of this administration I wouldn't be shocked to discover that the real reason is "petulance".

khazhoux|1 day ago

It’s obvious retaliation, and will be struck down by the courts.

tedsanders|1 day ago

I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.

az226|1 day ago

Your ballooned unvested equity package is preventing you from seeing the difference between "our offering/deal is better" and "designated a supply chain risk, with every company doing business with the government threatened to stop using Anthropic or be similarly dropped" (which goes well past what the designation itself requires). It's easier being honest.

edoceo|1 day ago

Friend, this reads like that situation where your paycheck prevents you from seeing clearly - I forget the exact quote. Sam doesn't play a straight game and neither does the administration - there are more than a few examples.

DavidSJ|1 day ago

OpenAI should not be agreeing to any contract with DOD under these circumstances of Anthropic being falsely labeled a supply chain risk.

KaiserPro|1 day ago

The problem is, the vague safeguards are not worth anything.

"we will comply with US law" The problem is, the US government does not actually comply with US law.

plandis|1 day ago

That’s not evidence. You’re effectively saying “trust me bro” without a shred of proof to back up your claims.

readitalready|1 day ago

As an OpenAI employee, quitting wouldn't be a problem, as you have a much higher chance of being successful after quitting than anyone else. You could go to any VC and they would fund you.

onion2k|1 day ago

This isn't even close to true. VCs aren't silly, and it's not the 2010-2015 days of free money any more. Having a big company on your resume is not enough to land your seed round. You need a product, traction, and real money revenue in most cases.

chrisfosterelli|1 day ago

I agree with what you're saying, but given the egos involved in the current admin there's a practical interpretation:

1. Department of War broadly uses Anthropic for general purposes

2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons

3. Anthropic disagrees and it escalates

4. Anthropic goes public criticizing the whole Department of War

5. Trump sees a political reason to make an example of Anthropic and bans them

6. The entirety of the Department of War now has no AI for anything

7. Department of War makes agreement with another organization

If there was only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or if that was seen as an unproven use case of unknown value compared to the more proven value of the rest of their organizational use, it would make sense that they'd be 1) in practice willing to compromise on this, and 2) now unable to do so with Anthropic specifically because of the political kerfuffle.

I imagine they'd rather not compromise, but if none of the AI companies are going to offer them it then there's only so much you can do as a short term strategy.

juggle-anyhow|1 day ago

Well at least we know now that the department of war is less capable than before. All because the big man shit his pants while Anthropic was in view.

pbhjpbhj|1 day ago

>5. Trump sees a political reason

Like, "they haven't paid me a bribe"? That seems to be the only "politics" at play in Trump's head.

FrustratedMonky|1 day ago

That is pretty optimistic; I hope it is true, and just a misunderstanding.

But man, this blew up pretty fast for a misunderstanding in some negotiation. Something must have been said in those meetings to make Anthropic go public.

nektro|5 hours ago

> Respectfully, it's very hard to see how anyone could look at what just happened and accept that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that.

to be clear i think your assessment of this situation is likely, but it could also be the case that pete and co like sam more than they do dario.

baconner|4 hours ago

I was trying to make no particular call on the actual reason, aside from pointing out how obviously false the statements made so far are, and how obviously they are not the real story. What a knot you have to tie yourself into to find an explanation in which OpenAI has not made an ethical compromise to stay in the game here. I can stretch and think of some ways, but they are far from the simplest explanation.

Lots of responses below give the likely real reasons, most of which are probably true in part, but in my opinion the primary driver of all the Trump administration's who's-in and who's-out decisions is fealty. Skills, value brought, qualifications, etc. - none of that matters above passing frequent loyalty tests, appealing to ego, and bribes (sorry, I mean donations). Imagine thinking "hey, we'll work towards fully autonomous killbots because our adversaries will get them too, but the tech isn't strong enough to let them loose yet" or "yes, you can use our AI for your panopticon surveillance, just not on our own citizens, because that is illegal" are lefty woke stances, but here we are. Dario failed the loyalty test, as anyone rational would.

DennisP|1 day ago

And unless GP has a security clearance, they can't know for sure what OpenAI is allowing on classified networks.

MattyRad|1 day ago

Yeah, agreed. I probably wasn't going to delete my OpenAI account (a la the link that is also being upvoted on HN); it just seemed like a hassle versus simply ceasing to use OpenAI. But when the staff at OpenAI employ mental gymnastics, selective hearing, willful ignorance, or plain ignorance to justify compliance with manmade horrors, I think it's probably important to vote with our feet.

manmal|1 day ago

Are you saying that everything so far in this administration has been 100% rational?

JumpCrisscross|1 day ago

> while another agrees to the same terms that led to that

One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.

gzread|15 hours ago

Surely the reason was the large sum of money?

cowsandmilk|1 day ago

> one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that

Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.

addandsubtract|1 day ago

Or corruption, in which Trump/Hegseth are getting a kickback from OpenAI, but giving the money to Anthropic would be "worthless" to them.

willis936|1 day ago

>or there's another reason for the loud attempt to blacklist Anthropic

This one is very easy. Trump has a well established pattern of making a loud statement to make it appear he didn't lose, even when he did.

az226|1 day ago

And Sam is a habitual liar.

jdiaz97|1 day ago

He literally just got community noted for lying. So much for a non-profit CEO or whatever it is now.

kotaKat|1 day ago

And an abuser, but they keep covering that one up.

spongebobstoes|1 day ago

anthropic has nothing but a contract to enforce what counts as appropriate usage of their models. there are no safety rails; they disabled their standard safety systems

openai can deploy safety systems of their own making

from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model the military needs a legal opinion before they can use the tool, or they might misuse it by accident

this is also preferable if you think the government is untrustworthy. an untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that openai builds or trains into the model

losvedir|1 day ago

Huh, that's an interesting and new perspective. I'd love to know what you mean by safety systems, and what OpenAI can do that Anthropic can't.

nawgz|1 day ago

Source?

mpalmer|1 day ago

This is entirely nonsense.

- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.

- The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.

- OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them into govt systems, regardless of the externalities

topheroo|1 day ago

> Cope and cognitive dissonance

RandomTisk|1 day ago

There's a critical mass of Trump Derangement Syndrome in SV, as this site exemplifies almost daily. The amount of vitriol and hatred spewed here is not healthy, nor are those who spew it. It kills rational debate, nuance and leads to foolish choices like someone cutting off their nose to spite their face as the old saying goes.

pnut|1 day ago

The president of the United States sets the tone: hatred without reason or explanation is the way the system works now. Belligerence and power are the currency.

Speaking to people's better angels as if it had a chance of influencing Trump's behaviour is a fool's errand. It's not derangement. His word is worthless.