
ChatGPT is making up fake Guardian articles. Here’s how we’re responding

12 points | casca | 2 years ago | theguardian.com

3 comments


cs702 | 2 years ago

Wow, ChatGPT seems to be better at rationalization than many human beings I know. When asked to find evidence supporting a position, it wrote up a fake article as if it had been written by an actual reporter at the Guardian, and presented the fake article as evidence in order to remain consistent with its position.

When asked about the made-up article, the reporter "couldn’t remember writing the specific piece, but the headline certainly sounded like something they would have written. It was a subject they were identified with and had a record of covering." The Guardian adds: "Worried that there may have been some mistake at our end, they asked colleagues to go back through our systems to track it down. Despite the detailed records we keep of all our content, and especially around deletions or legal issues, they could find no trace of its existence. Why? Because it had never been written."

People make stuff up all the time to rationalize their current positions and past decisions, but most human beings would find it difficult or impossible to write up and cite, on the spot, a fake article convincing enough to fool anyone. It's more than a little worrisome.

wildrhythms | 2 years ago

Again and again I wonder if the companies with the money to run these things at scale are making a huge mistake branding their generative AI as some kind of all-knowing knowledge engine.

verdverm | 2 years ago

> I’m Bard, your creative and helpful collaborator. I have limitations and won’t always get it right, but your feedback will help me improve.

Not all of them are claiming as much; I doubt any of the big players are. See the example warning above. I suspect this claim or idea comes from pundits and hype.