slaterbug | 3 months ago
This is what I’ve been thinking lately as well. Couple that with legal responsibility for any repercussions, and you might have a way for society to thrive alongside AI and robotics.
I think any AI or robotic system acting upon the world in some way (even LLM chatbots) should require a human “co-signer” who takes legal responsibility for anything the system does, as if they had performed the action themselves.