Either you care about being correct or you don't. If you don't care, then it doesn't matter whether you made it up or the AI did. If you care, then you'll fact-check before publishing. I don't see why this changes.
When things are easy, you're going to take the easy path even if it means quality goes down. It's about trade-offs. If you had to do it yourself, perhaps quality would have been higher because you had no other choice.
Lots of kids don't want to do homework. That said, previously many would do it anyway, because there wasn't another choice. But now they can just ask ChatGPT for the answers, which they'll write down verbatim, with zero learning taking place.
Caring isn't binary, and it doesn't work in isolation.
Because maybe you want to, but you have a boss breathing down your neck and KPIs to meet, and you haven't slept properly in days and just need a win, so you get the AI to put together some impressive-looking graphs and stats for that client showcase that's due in a few hours.
Things aren't quite so black and white in reality.
I mean, those same conditions already lead humans to cut corners and make stuff up themselves. You're describing a problem where bad incentives and conditions lead to sloppy work, and that happens with or without AI.
Catching errors and validating work is obviously a different process when the output comes from an AI vs. a human, but I don't see how it's fundamentally different here. If the outputs are heavily cited, that might go some way toward making slip-ups easier to catch and correct.
I think a lot about how differentiating facts and quality content is like differentiating signal from noise in electronics. The signal-to-noise ratio on many online platforms was already quite low. Tools like this will absolutely add more noise, and arguably the nature of the tools themselves makes the noise harder to separate out.
I think this is a real problem for these AI tools. If you can't separate the signal from the noise, the tool doesn't provide any real value, like an out-of-range FM radio station.
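To make the electronics analogy concrete: signal-to-noise ratio is conventionally measured in decibels as 10·log10(P_signal / P_noise), so the ratio collapses as noise grows even if the signal never degrades. A minimal sketch in Python (the power figures are invented purely for illustration):

    import math

    def snr_db(signal_power: float, noise_power: float) -> float:
        # Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise).
        return 10 * math.log10(signal_power / noise_power)

    # Hold the quality content (signal) fixed and let generated filler
    # (noise) grow: the ratio drops even though the signal is unchanged.
    signal = 100.0
    for noise in (10.0, 100.0, 1000.0):
        print(f"noise={noise:6.0f} -> SNR = {snr_db(signal, noise):6.1f} dB")
    # noise=    10 -> SNR =   10.0 dB
    # noise=   100 -> SNR =    0.0 dB
    # noise=  1000 -> SNR =  -10.0 dB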
It's possible that you care, but the person next to you doesn't, and external pressures force you to keep up with the person who's willing to shovel AI slop. Most of us don't have the luxury of the moral high ground at our jobs.
Maybe this would make sense if you saw the whole world as "kids" that you had to protect. As an adult who lives in an adult world, I would like adults to have access to metal tools and not just foam ones.
Don't you think the problem of checking for correctness then becomes more insidious? We can now generate hundreds of reports that look very professional on the surface. The usual things that would tip you off that the author was careless aren't there: typos, poor sentence construction, missing references. Just more noise to pick through for signal.
> If you care, then you'll fact-check before publishing.
Doing a proper fact-check is as much work as doing the entire research by hand, and therefore this system is useless to anyone who cares about the result being correct.
> I don't see why this changes.
And because of the above, this system should not exist.
If 20% of people don't care about being correct, the rest of us can deal with that. If 80% of people don't care about being correct, the rest of us will not be able to.
Same thing as misinformation. A sufficient quantitative difference becomes a qualitative difference at some point.
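To put rough numbers on that threshold (an illustrative back-of-the-envelope model, not something claimed upthread): if a fraction p of publishers skip fact-checking and the careful remainder absorb the verification work, each careful person carries p / (1 - p) extra load, which blows up as p grows:

    # Assumed toy model: a fraction p of publishers skip checking, and the
    # careful (1 - p) fraction must verify that output on top of their own.
    def extra_load(p: float) -> float:
        # Unverified output per careful person, relative to their own output.
        return p / (1 - p)

    for p in (0.2, 0.5, 0.8):
        print(f"p={p:.1f}: {extra_load(p):.2f}x extra verification work each")
    # p=0.2: 0.25x extra verification work each
    # p=0.5: 1.00x extra verification work each
    # p=0.8: 4.00x extra verification work each

At 20% careless, the overhead is a quarter of each careful person's own workload; at 80% it is four times it, which is one way a quantitative difference turns qualitative.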
simonw|1 year ago
Sure, but if you're a professional, you have to care about your reputation. Presenting hallucinated cases from ChatGPT didn't go very well for that lawyer: https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-...
sbarre|1 year ago
Consultants aren't the ones doing the fact-checking; that falls to the client, who ironically tends to assume the consultants did it.