Step 1: We believe the enemy is a monster who does [terrible act].
Step 2: To counteract this, we must do [terrible act].
Result: we maintain we are the "good guys" because we were "forced" into it by their presumed behavior.
He who fights with imagined monsters should look to it that he himself does not become a monster. And if you gaze long into an abyss, the abyss also gazes into you.
That quote is 140 years old. Is that enough time to heed it?
Nobody was advocating for zero AI in the military - certainly not Anthropic. They were fine with all lawful US military use cases except for two: the mass domestic surveillance of Americans and fully autonomous weapons. Whether you agree or disagree with their particular red lines, that's quite far from them trying to keep their product out of the military.
Nobody is complaining about the government not giving Anthropic a contract. It's about the unprecedented and outrageous threats to destroy their business if they don't provide the government with what it demands. There is no national supply risk from Boeing using Claude Code just because Anthropic won't agree to domestically-surveilling killbots. The government's behaviour is overwhelmingly malevolent and terrifying.