bambax|3 years ago
For some reason, we assume that what comes out of a computer is more trustworthy than what people say. We think computers are transparent, reliable, idempotent, and have no agenda. Even more so if we call them "intelligent"...
But ChatGPT is a bullshit machine, and that much is new.

tpmoney|3 years ago
At least the good part of the answers being on Stack Overflow is, as they used to say, "On the internet, nobody knows you're a dog." So whether an answer came from ChatGPT or from an aggressively overconfident fool, a wrong answer should get the same downvotes regardless, and a correct answer should get the same upvotes. Probably the two biggest issues with ChatGPT being used to provide answers are whether it's wrong often enough to start swinging the experience of the site negative, and, more importantly, that some people are getting fake internet points unfairly.

convexfunction|3 years ago
To the extent this perception exists -- and I don't think "came from a computer" falls within the top 5 actually effective methods of laundering bullshit nowadays, though maybe it used to -- you might expect that it gets crushed into dust as the public gets more exposure to high-profile counterexamples.
And, wait, isn't the concern usually that people read AI-generated content and trust it without realizing it came from a computer?