Respectfully, it's very hard to see how anyone could look at what just happened and conclude that one company ends up classed a "supply chain risk" while another agrees to the very same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms, or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.
Rebuff5007|1 day ago
I think what you are missing is their annual comp with two commas in it.
lazide|1 day ago
Shit, I wonder if I still have any of those ‘tres commas club’ t-shirts lying around?
KaiserPro|1 day ago
"we will comply with US law" The problem is, the US government does not actually comply with US law.
chrisfosterelli|1 day ago
1. Department of War broadly uses Anthropic for general purposes
2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons
3. Anthropic disagrees and it escalates
4. Anthropic goes public criticizing the whole Department of War
5. Trump sees a political reason to make an example of Anthropic and bans them
6. The entirety of the Department of War now has no AI for anything
7. Department of War makes agreement with another organization
If there was only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or if it was seen as an unproven use case of unknown value compared to the more proven value of the rest of their organizational use, it would make sense that they'd be 1) willing in practice to compromise on this, and 2) now unable to do so with Anthropic specifically because of the political kerfuffle.
I imagine they'd rather not compromise, but if none of the AI companies are going to offer it to them, then there's only so much you can do as a short-term strategy.
pbhjpbhj|1 day ago
Like, they haven't paid me a bribe? That seems to be the only "politics" at play in Trump's head.
FrustratedMonky|1 day ago
But man, this blew up pretty fast for a misunderstanding in some negotiation. Something must have been said in those meetings to make Anthropic go public.
nektro|5 hours ago
to be clear i think your assessment of this situation is likely, but it could also be the case that pete and co simply like sam more than they do dario.
baconner|4 hours ago
Lots of responses below give the likely real reasons, most of which are probably true in part, but in my opinion there is one primary reason behind every who's-in and who's-out decision made by the Trump administration: fealty. Skills, value brought, qualifications, etc. -- none of that matters more than passing frequent loyalty tests, appealing to ego, and bribes (sorry, I mean donations). Imagine thinking "hey, we'll work towards fully autonomous killbots because our adversaries will get them too, but the tech isn't strong enough to let them loose yet" or "yes, you can use our AI for your panopticon surveillance, just not on our own citizens because that is illegal" are lefty woke stances, but here we are. Dario failed the loyalty test, as anyone rational would.
JumpCrisscross|1 day ago
One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.
cowsandmilk|1 day ago
Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.
willis936|1 day ago
This one is very easy. Trump has a well established pattern of making a loud statement to make it appear he didn't lose, even when he did.
spongebobstoes|1 day ago
openai can deploy safety systems of their own making
from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model, the military needs a legal opinion before they can use the tool, or they might misuse it by accident
this is also preferable if you think the government is untrustworthy. an untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that openai builds or trains into the model
mpalmer|1 day ago
- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.
- The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.
- OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them in government systems, regardless of the externalities.
pnut|1 day ago
Speaking to people's better angels as if it has a chance of influencing Trump's behaviour is a fool's errand. It's not derangement. His word is worthless.