top | item 34478411

kirsebaer | 3 years ago

ChatGPT often makes up facts. It outputs stuff that looks like it could have been written by a human, not stuff that is correct.

Don’t use ChatGPT for medical research.

rapsey|3 years ago

These arguments are just like the old days when wikipedia showed up. Don't miss the forest for the trees. ChatGPT is a huge threat to google and a bunch of other industries.

archagon|3 years ago

Not comparable. Wikipedia has always had a strict policy on citing sources. ChatGPT can't cite sources by design, because its answers are based on synthesis.

bognition|3 years ago

Absolutely the case, but people also make stuff up online all the time, so Google has this exact same problem.

memen|3 years ago

No. Google gives you the source. ChatGPT does not.

frakt0x90|3 years ago

At least with Google you have sources you can trust more than others, whereas ChatGPT is a black box.

eric4smith|3 years ago

I clearly said we use Google to confirm the AI output.

And we also do not do medical stuff. I just used that as an example.

Turing_Machine|3 years ago

> Don’t use ChatGPT for medical research.

Or Google. There are plenty of pages out there that (e.g.) claim that Alzheimer's is caused by drinking out of aluminum cans, or that the world is controlled by grey aliens from Zeta Reticuli.

dinkumthinkum|3 years ago

… you know Google provides the URL, right? With Google it is very easy to tell if the information is coming from NIH or infowars/forums/etc.

tasuki|3 years ago

> ChatGPT often makes up facts.

As opposed to... Google? Your doctor? My doctor?

dinkumthinkum|3 years ago

Absolutely, as opposed to those things. With Google, if you use a reliable source like Mayo, NIH, or even WebMD, it is clearly more likely to have accurate information than something that proves even numbers are prime. Certainly all of those can be inaccurate, but where in the world do you think ChatGPT pattern-matches its information from?

theGnuMe|3 years ago

Isn't there a medical ChatGPT that passed the medical licensing exam? I thought I saw that come up.

abnry|3 years ago

Let's say ChatGPT gives you false information 50% of the time. It is still useful.

Just like it is harder to find prime numbers than to verify that a number is prime, it is harder to dig up potential tidbits of information than to verify that a piece of information handed to you is true.
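The verify-vs-generate asymmetry that analogy leans on can be sketched in a few lines of Python (function names are my own; trial division stands in for any verification procedure):

```python
import math

def is_prime(n: int) -> bool:
    """Verify primality by trial division: checks divisors up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def next_prime(n: int) -> int:
    """Find the next prime >= n: repeated verification, so strictly more work
    than a single is_prime() call."""
    while not is_prime(n):
        n += 1
    return n

print(is_prime(97))     # verifying one candidate is a single cheap check
print(next_prime(90))   # finding one requires verifying many candidates
```

Checking a handed-to-you answer costs one `is_prime` call; producing an answer from scratch costs a whole search loop of them, which is the commenter's point about information.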

dinkumthinkum|3 years ago

50% is still useful? A broken watch is useful in that sense as well, I guess. I can only see that as useful if you don't include efficiency in the definition of useful.

brink|3 years ago

Your prime number analogy doesn't hold water because the average person doesn't verify. Being wrong half the time has potential for serious damage.