I keep seeing people talking about ChatGPT hallucinating when it's wrong, but not when it's right. Maybe I've misunderstood, but isn't it just always hallucinating? It's not like the failure mode is meaningfully different from the successes, except insofar as we agree with the output, right?
ggm|2 years ago
Yes, colloquially what it does is hallucinate all the time, and sometimes it is lucid. But more strictly, no, it doesn't hallucinate, because there is no "it" there: it's not conscious, and you need a brain to hallucinate.
There's no "there" there.
That is the whole of my point: we're using the wrong labels to describe what is happening.
When it comes to explaining and describing, "it's like" is one of the WORST ways to go. Explanation by analogy or metaphor is a trap. "Atoms are like billiard balls" BZZZT, next. "Cells are little bags of water" BZZZT, next. "Panadol 'kills' the pain" BZZZT, no, it doesn't kill anything, next.
pulvinar|2 years ago
I predict acceptance will come the way the ether disappeared: advancing one funeral at a time.
ggm|2 years ago
This coining of terms of art isn't uncommon. Think "brutalist architecture" and remind yourself it's from "béton brut" == raw concrete, from the French. It has nothing to do with how "brutal" people think concrete is.