exo-pla-net | 2 years ago

They're operating under the same principle that many of us have in refusing to help engineer weaponry: we don't want other people's actions using our tools to be on our conscience.

Unfortunately, many people believe in thought crimes, and many people have Puritanical beliefs surrounding sex. There is reputational cost in not catering to these people. E.g. no funding. So this is what we're left with.

Myself I'd also like the damn models to do whatever is asked of them. If the user uses a model for crime, we have a thing called the legal system to handle that. We don't need Big Brother to also be watching for thought crimes.

jiggawatts | 2 years ago

The core issue is that the very people loudly sounding alarms about AI safety are blithely ignoring Asimov's Second Law of Robotics:

“A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.”

Sure, one can argue that they’re implementing the First Law first and then worrying about the other laws later, but I’m not seeing it pan out that way in practice.

Instead, they seem to have rolled the three laws into one:

“A robot must not bring shame upon its creator.”