top | item 34721126


redml | 3 years ago

You would think all it really needs is adjustable content moderation levels, like search engines have with safe search. Leave it on the highest safety level by default and allow the user to adjust it accordingly.

But I'm guessing not everybody sees "AI generated text" the same way as accidentally returning porn, hate speech, or the like in a list of search results. Something about it, I suppose, feels more personal or deliberate.


badrabbit | 3 years ago

The problem is liability. Unlike search engines, this is legally untested territory. A search engine presents content others created, as do social media companies, YouTube, etc. While ChatGPT does use user-generated content as input, its output is heavily analyzed and processed, so you can't really say it is serving you content; it is more like creating content based on information it knows. So, in effect, it is OpenAI that is liable for what ChatGPT says.

Everyone is saying hate speech, so a racially offensive remark by ChatGPT could mean civil suits for defamation, emotional damages, etc., even with user-adjustable filters.

But I think it is even more serious if it told people how to commit a serious crime, for example. Wouldn't OpenAI be a co-conspirator?

pjc50 | 3 years ago

Yeah - although the liability isn't really legal (that has yet to be properly explored and would probably be covered by disclaimers) so much as social and reputational.

gowld | 3 years ago

The Anarchist Cookbook is legal in the US.

mc32 | 3 years ago

Would it be accused of treason if it provided enemies with technical details on how to make {secret military tech}?