SamaraMichi | 5 months ago
The nuance a humanoid machine intelligence needs is far beyond what the current state of the art can deliver. Ultimately, every autonomous robot's actions need to fall back to a real human for accountability, just as they do for heavy-machinery operators today.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.