puppycodes | 2 days ago
Don't get me wrong, I'm glad they are unwilling to do certain things...
but to me it also seems a little ironic that Anthropic is literally partnered with Palantir, which already mass-surveils the US. Claude was used in the operation in Venezuela.
Their line not to cross seems absurdly thin?
Or there is something mega scary that's already much worse that they were asked to do which we don't know about, I guess.
xvector | 2 days ago
Their hard lines are:
- no usage of AI to commit murder WITHOUT a human in the loop
- no usage of AI for domestic mass surveillance
puppycodes | 2 days ago
Claude: "Are you sure you want me to commit murder?"
User: "Yes"
Or do you mean the human presses a button:
Claude: "Do you want to commit murder? If so, press the button."
User: "I pressed the button"
Claude: "Great! Now let's summarize what we did."
gck1 | 2 days ago
To me all of this reads like "we don't trust our models enough yet to not cause domestic havoc, and we don't trust our models enough yet to not vibe-kill people; everything else is fine". Key word being "yet".
puppycodes | 2 days ago