top | item 38376909


denlekke | 2 years ago

they definitely do, but i think with podcasts i'm generally better able to evaluate how much trust to place in what i'm hearing. i know if i'm listening to a professional chef podcast, i can probably trust they'll generally be right talking about how they like to bake a turkey. if i'm listening to the hot wings guy interview zac efron, i know to be less blindly trusting of their info on astrophysics.

with chatgpt i don't know its "experience" or "education" on a topic, and it has no social accountability motivating it to make sure it gets things right, so i can't estimate how much i should trust it in the same way.

WendyTheWillow | 2 years ago

I guarantee ChatGPT won't lie to you about how to bake a turkey.

The paranoia around hallucination is wildly overblown, especially given the low stakes of this context.

taway1237 | 2 years ago

I don't know how people use ChatGPT at all. It confidently hallucinated answers to 4 out of 5 of my latest "real" questions, with code examples and everything. Fortunately, with code I could easily verify that the provided solutions were worthless. Granted, I was asking questions about my niche that were hard enough that I couldn't easily Google them or find a solution myself, but I think that's the bar for being useful. The only thing it got right was finding a marketing slogan.

bigDinosaur | 2 years ago

It has lied to me about extremely basic things. It absolutely is worth being paranoid about. You simply cannot blindly trust it.

denlekke | 2 years ago

i appreciate your confidence and would love to know how far you would go with the guarantee! it makes me realize there is at least one avenue for some level of trust in gpt accuracy, and that's my general awareness of how much written content on the topic it probably had access to during training.

i think maybe your earlier comment was about the average trustworthiness of all podcasts vs the same for all gpt responses. i would probably side with gpt4 in that context.

however, there are plenty of situations where the comparison is between a podcast from the best human in the world at something and gpt, which might have had less training data on that topic. and sometimes the risk isn't eating an undercooked turkey but learning cpr wrong or having an airbag not deploy.