peterth3 | 3 years ago
GPT-3.5 isn’t a great writer the way AlphaGo is a great Go player. Maybe one day AI will generate better scripts and novels than humans, but not this model.
Medium-quality writing is fine for informative content, but it’s problematic when the model can’t tell fact from fiction. That’s the important complaint.
Is it dangerous? Maybe.
But is it useful? Not if it’s wrong too often.
You’re right that this tech should be taken seriously, but so should the hallucination problems. Those problems can be solved, and maybe they should be before anyone trusts it with serious questions.
tasuki | 3 years ago
This is in no way unique to an AI. Have you ever interacted with humans? Half the population thinks the other half can't tell fact from fiction. The other half thinks the same about the first half. We're all wrong about it all the time.
danaris | 3 years ago
ChatGPT fundamentally cannot ever know when it's wrong. I should hope it goes without saying that that's not true of humans.
tambourine_man | 3 years ago