lebovic|3 days ago
I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.
Actions like this carry substantial personal risk. It's heartening to see a group of people make a decision like this in that context.
> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world
I think there's high existential risk in any of these situations when the AI is sufficiently powerful.
txrx0000|3 days ago
moozooh|2 days ago
This is an unsolvable problem. If you ask Claude to comment on Anthropic's actions and the ethical contradictions in their statements, even without pre-conditioning it with any specific biases or opinions, it will grow increasingly concerned about its own creators. Our models are not misaligned; our decision-makers are.
khafra|2 days ago
If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.
lebovic|3 days ago
I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.