
rustyhancock | 23 hours ago

I know this is necessarily a very unpopular opinion, but here it is.

I think HN in particular, as a crowd, is very vulnerable to the halo effect and groupthink when it comes to Anthropic.

Even being generous, they are only very minimally a "better actor" than OpenAI.

However, we are so enthralled by their product that we let that view bleed over into our view of their ethics.

Saying you want your tools used in line with the US constitution, within the US, on one particular point, is hardly a high moral bar. It's self-preservation.

All Anthropic have said is:

1. No mass domestic surveillance of Americans.

2. No fully autonomous lethal weapons yet.

My goodness, that's what passes for a high moral standard? Is anything that doesn't hit those very carefully worded points really not "evil"?


JauntyHatAngle|22 hours ago

Let's generalise a bit more here: every company at any time could completely heel-turn and do awful things. Even my favourite private companies (e.g. Valve) have done things that I would consider evil.

However, I don't think I'm alone in generally wanting to do good while also wanting convenience. I know that nearly every bit of consumption I do is probably negative in some way, and that there is no truly "apolitical" action anyone can take.

But can't I at least get annoyed and take my money somewhere else for the short amount of time another company is doing it better?

Yes, if OpenAI suddenly leaps forward with Codex and pounds Anthropic into the dust, I'll likely switch back despite my moral grievances. But when I can get mildly motivated to jump over for what seems, to me, like better morality without much cost to myself, I'll do it.

bluGill|20 hours ago

There are no universal morals. Anything and everything you think is evil, some culture (possibly in history) thinks is good. I can't even think of something good that I'm confident everyone would agree is good.

There are some people (and companies are run by people) who are so bad I boycott them. Most bad actors I accept anyway, because society cannot work otherwise.

earthnail|22 hours ago

Well, they did stand up to the US administration and lost a lot of money in the process. That takes courage. They were clearly being bullied into compliance, and they stood their ground.

You can see the significance of this if you look at German Nazi history. If more companies had stood up to the administration, the Nazi state would have been significantly harder to build.

In my opinion, what Anthropic did is not a small thing at all.

rustyhancock|21 hours ago

The comment I replied to said that they believed OpenAI would allow "AGI to be used for truly evil purposes".

By contrast, Anthropic wouldn't? Yet Anthropic's stance amounts to only two narrow restrictions. As I said, are those two things the only evil things possible?

If not, why is it that people on HN think Anthropic would not allow evil usage?

My hypothesis is a halo effect: we are so enthralled by Claude's performance that some of us struggle to rationally assess what Anthropic has actually done.

Yes, it's no small thing to say no to the Trump administration, but that does not mean they haven't said yes to, or otherwise facilitated, other evils.

In fact, to me the statements from Anthropic seem to make clear that they are okay with many evils.

jacquesm|21 hours ago

It's not high. But it is higher.

rustyhancock|21 hours ago

We'll take anything we can right now. I agree.

Although we shouldn't let that lead us to misjudge what we are actually getting.

ekianjo|20 hours ago

Let's not forget they also lobby to forbid models from China and pretend that distillation is stealing. But somehow, just because they said no on two points, the majority of HN folks think of them as virtuous.