rustyhancock | 22 hours ago
By contrast, Anthropic wouldn't? Yet Anthropic's stance is only two narrow restrictions. As I said, are those two things the only evil things possible?
If not, why is it that people on HN think Anthropic would not allow evil usage?
My hypothesis is a halo effect. We are so enthralled by Claude's performance that some struggle to rationally assess what Anthropic has actually done.
Yes, it's no small thing to say no to the Trump administration, but that does not mean they haven't said yes to, or otherwise facilitated, other evils.
In fact, to me the statements from Anthropic seem to make clear they are okay with many evils.
thunky | 20 hours ago
Really, I think Anthropic should have a single restriction: do not assist with illegal or unconstitutional activities. If automated killing etc. is illegal, then it would be covered by that one rule.
I don't think Anthropic should be in the business of deciding what is "evil".
toss1 | 18 hours ago
Everyone SHOULD continuously consider, decide, and live by moral judgements and codes they internalize, and use them to make choices in life.
This aspect of life should NEVER be outsourced — of course, learn from and use codes others have developed and lived by — but ALWAYS consider deeply how it works in your situation and life.
(And no, I do NOT mean use situational ethics; I mean each person considering, choosing, and internalizing the codes by which they live.)
So, yes, Anthropic and anyone else building products absolutely should be deciding for themselves what they will build and for what purposes it is fit to be used, and telling others about those purposes. For products like AI, this absolutely includes deciding what is "evil" and preventing such uses.
If customers find such restrictions are not what they want, they ARE FREE to not use the product.