lich_king|1 day ago
So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.
However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.
clutter55561|1 day ago
I’m in the camp of “the world is so fucked up, take the good when you can find it”.
Beggars can’t be choosers when it comes to taking a stand against dictatorships.
abustamam|1 day ago
Not sure why it's controversial that they said no, regardless of the reasoning. Yeah there's a lot of marketing speak and things to cover their asses. Let's call them out on that later. Right now let's applaud them for doing the right thing.
FWIW I do not think they are the "good guys" (if I had a dollar for every company that had a policy of not being evil...). But they are certainly not siding with the bad guys here.
scrubs|1 day ago
For if you don't, the next step is cynicism maximally operationalized: what, you're not playing games/political BS to get ahead? What are you? A chump? An idiot?
That kind of stuff spreads like wildfire, making corporate America ... something else, to put it politely.
Doing the right thing has cost me big time here and there. I don't care. Simultaneously, orgs are not all bad; that's another distortion we can do without.
mkozlows|1 day ago
Like Google's support for the open web: They very sincerely did support it, they did a lot of good things for it. And then later, they decided that they didn't care as much. It was wrong to put your faith in them forever, but also wrong to treat that earlier sincerity as lies.
In this case, Anthropic was doing a good thing, and they got punished for it, and if you agree with their stand, you should take their side.
Davidzheng|1 day ago
Many of us remember that OpenAI was also started by people with strong personal values. Their charter said that they would not monetize after reaching AGI, that their fiduciary duty was to humanity, and that the non-profit board would curtail the ambitions of the for-profit arm. Was this not also believed by a sizeable portion of the employees there at the time? And what is left of those values after the financial incentives grew?
The market forces from the huge economic upside of AI devalue individual values in two ways. First, they reward whoever accelerates AI the most over individuals who are more careful and act on their values; the careful simply lose power in the long run until their virtue has no influence. As Anthropic says in its mission statement, it is not of much use to humanity to be virtuous if you are irrelevant. Second, as is true for many technologies, economic prosperity is deeply linked to human welfare, and slowing or limiting progress leads to real, immediate harm to the human population. Thus any government regulation that works against AI progress will always be unpopular, because the values warning of future harm from AI are fighting against the values of saving people from disease and starvation today.
ajam1507|1 day ago
The supply chain risk designation will be overturned in court, and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers. Not to mention that giving in would mean they lose lots of their employees who would refuse to work under those terms. In this case, the principles are less than free.
Stratoscope|1 day ago
In fact, a friend heard about this and immediately signed up for a $200/year Claude Pro plan. This is someone who has been only a very occasional user of ChatGPT and never used Claude before.
I told my friend "You could just sign up for the free plan and upgrade after you try it out."
"No, I want to send them this tangible message of support right now!"
skissane|1 day ago
I'm honestly uncertain how the courts will rule. You could be right, but it isn't guaranteed. I think a judicial narrowing of it is more likely than a complete overturn.
OTOH, I think it's almost guaranteed it will be watered down by the government, because read expansively, it could force Microsoft and AWS to choose between no longer reselling Claude and dropping the Pentagon as a customer. I don't think Hegseth actually wants to put them in that position – he probably honestly doesn't realise that's what he's potentially doing. In any event, Microsoft/AWS/etc's lobbyists will talk him out of it.
And the more the government waters it down, the greater the likelihood the courts will ultimately uphold it.
> and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.
Maybe. The problem is that B2B/enterprise is arguably a much bigger market than B2C. And the US federal contracting ban may have a chilling effect on B2B firms that also do business with the federal government: they may worry that using Claude could hurt their ability to win US federal deals, and they may view OpenAI/xAI (and maybe Google too) as safer options.
I guess the issue is that nobody yet knows how widely or narrowly the US government is going to interpret its "ban on Anthropic". And even if they decide to interpret it relatively narrowly, there is always the risk they might shift to a broader reading in the future. Possibly, some of Anthropic's competitors may end up quietly lobbying behind the scenes for the Trump admin to adopt broader readings of it.
conartist6|1 day ago
Surely OpenAI cannot but notice that those values, held firmly, make you an enemy of the state?
cperciva|1 day ago
Indeed. If everything is a priority, nothing is a priority; you only know that something is a real priority when you get an answer to the question "what will you sacrifice for this?"