item 39502272

spangry | 2 years ago

They could use control vectors, one for each individual (https://news.ycombinator.com/item?id=39414532). Or they could selectively apply the censorship model they already quite clearly run on ChatGPT's output.
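For readers unfamiliar with the linked thread: a control vector (sometimes called a steering vector) is a fixed direction in a model's activation space that gets added to the hidden states at some layer during generation, nudging outputs toward or away from a behavior. A minimal sketch of the mechanism, using numpy with toy shapes; the function name, shapes, and `strength` parameter are illustrative assumptions, not any particular library's API:

```python
import numpy as np

def apply_control_vector(hidden, vector, strength=1.0):
    """Add a scaled steering vector to every token position's hidden state.

    hidden:  (seq_len, d_model) activations at one transformer layer
    vector:  (d_model,) control vector (e.g. one per individual user)
    strength: scalar controlling how hard the model is steered
    """
    # Broadcasting adds the same (d_model,) vector to each row of hidden.
    return hidden + strength * vector

# Toy example: random activations plus a hypothetical per-user vector.
rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 8))       # 4 tokens, d_model = 8
user_vector = rng.standard_normal(8)
steered = apply_control_vector(hidden, user_vector, strength=2.0)
```

In practice this addition would be done inside a forward hook on a chosen layer of the model, with a vector derived per user; the point is only that the intervention is a cheap elementwise add, not a retrained model.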

Yes, people sometimes believe false things. And people sometimes harm themselves or others when acting on this kind of information. So what's the solution? Put a single mega corporation in charge of censoring everything according to completely opaque criteria? People get nervous when even democratically elected governments start doing stuff like that, and at least they actually have some say in that process.

Frankly, I'd prefer the harm that would follow from unfettered communication of information and ideas over totalitarian control by an unaccountable corporation.
