top | item 38526738

jal278 | 2 years ago

I don't get this line of logic -- of course software has safety implications, because people use it for things in the real world. It isn't "math" that is cleanly separable from the rest of humanity; its training data comes from humanity, and it will be used toward human goals. AI is entangled with the rest of human dealings.

Whether or not AI poses an existential threat to us, I'm open to either conclusion, but the fact that the experts (e.g. Hinton, LeCun) are divided is reason enough to be concerned.

haltist | 2 years ago

The way safety is handled in real-world situations is through legal and monetary incentives. If the tanker you are driving to the gas station blows up, then people get fired (no pun intended) and face legal repercussions. This is the case for anything that must operate in the real world: safety is defined and then legally enforced. AI safety is no different; if an AI system makes a mistake, then the operators of that system must be held liable. That's it. Everything else about extinction and other sci-fi plots has no bearing on how these systems should be deployed and managed.

I have no idea what people mean when they say LLMs must be safe. They generate words; what exactly about words is unsafe?