It shouldn’t be allowed to respond with false information about a person being a child molester, and be unable to be corrected. If that’s not possible, then it shouldn’t be legal.

It’s a bit like proposing to ban cars if someone drives one into a pedestrian, isn’t it?

zarzavat|1 year ago
LLMs are computer programs, tools. They’re not sentient; they don’t know what’s true or false. The error lies entirely with the human who chooses to put more than trivial weight on their output.
Reminds me of the “talking to God” program in TempleOS. We are all Terry now.

vundercind|1 year ago
Running a service that prints dangerous lies about people should come with so much liability that it’s economically infeasible, though.
Then again, I have similar feelings about ad networks and social media “feeds”.

OK, fine, we can solve this with the informational equivalent of the Proposition 65 notice: “This content is the result of an LLM. It may contain falsehoods or libels. You are prohibited from relying on this information, and if you republish it you assume liability for any falsehoods.”

pjc50|1 year ago
(Remember the Java license clause about not using it for nuclear reactors?)
Maybe we need a Data Protection Act adjustment: before using an LLM, the individual entering the prompt needs to be registered as a data controller, and needs to secure the consent of all individuals whose names appear in the LLM data?

That comparison doesn’t fit the situation: Microsoft is liable for their LLMs just like they’re liable for their corporate vehicles. If their company cars crash into pedestrians, they don’t get to say “oh, that just happens sometimes” and shrug it off.

pavel_lishin|1 year ago
It’s closer to banning false product labeling, which we do.

acdha|1 year ago
MissTake|1 year ago