m_eiman|2 years ago

Can we stop calling AIs giving incorrect information “hallucinations”, please? It’s just a clever PR stunt to sweep the glaring issues under the carpet.

pxmpxm|2 years ago

Intractable model error that's inherent to the approach won't get you any funding, though.

Anthropomorphizing statistical learning is how you build a hype machine to cash out people with zero handle on the subject. See the comment below about "AI judges" and "true justice". Just like early electricity, all people see is magic.

add-sub-mul-div|2 years ago

No, it's good that the public understands that AIs are wrong so regularly that we need a special word dedicated to this one specific manner in which they're wrong.

Generative AI output is becoming inextricably associated with this word, and that's not a bad thing to keep people aware of.

lupire|2 years ago

There should be a special word for the rare occasion when the LLM generates truth.

empath-nirvana|2 years ago

> No, it's good that the public understands that AIs are wrong so regularly

_Compared to what_, exactly? Compared to a Google search? Compared to asking a random person? Compared to Wikipedia? New York Times journalists?

All of those things are wrong _very_ frequently. It's such an uninteresting thing to call out every time an AI is wrong, when it's right so frequently that people don't bother to notice how amazing it is that it gets anything correct about the world at all.

linkjuice4all|2 years ago

Any recommendations? The public seems to actually understand what this means although it’s just more anthropomorphization of a random bullshit generator.

zer00eyz|2 years ago

How about you call them what they are:

Bugs, defects, and "not fit for production".

How about we stop with all the nonsense around calling it "temperature" like it's a sick baby and call it RAND, 'cause that's what it is.
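
For what it's worth, "temperature" in standard softmax sampling is just a scaling knob on the logits before the random draw at the very bottom. A minimal sketch in plain Python (toy logits over a made-up 4-token vocabulary, no particular library's API):

    import math
    import random

    def sample_with_temperature(logits, temperature=1.0):
        # Scale logits by 1/temperature: T < 1 sharpens the distribution
        # toward the argmax, T > 1 flattens it toward uniform randomness.
        scaled = [l / temperature for l in logits]
        # Softmax (max-subtracted for numerical stability).
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # The actual random draw lives here.
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Toy logits over a 4-token vocabulary.
    print(sample_with_temperature([2.0, 1.0, 0.5, -1.0], temperature=0.7))

So yes, there's a RAND at the bottom of it; the temperature only decides how much the dice are loaded.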

The P.T. Barnum levels of bullshit around ML (see, we have a term that isn't using "artificial" or "intelligence") have gotten old. Sam Altman is the next Elizabeth Holmes.

</rant>

lupire|2 years ago

LLMs are hallucinating machines. They never not hallucinate. Coincidentally, sometimes they hallucinate something true.

pimanrules|2 years ago

I forget where I originally heard this idea, but I always explain to people that LLMs are (affectionately) "bullshitters." Terms like "lying" or "hallucinating" imply some regard for the truth, but it doesn't care at all whether what it says is true, save for the fact that true text is slightly more plausible than false text.

xanderlewis|2 years ago

Instead of ‘hallucinations’, try ‘samplings from the model that happen not to be sufficiently reminiscent of reality’. Of course, it’s a little bit less catchy. But that’s the problem with catchiness — it sticks regardless of its truth.

The problem is that ‘correct’ outputs are treated as if they were the product of a process that is in any way different from the one that produces the ‘hallucinated’ ones.
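
To make that concrete, here's a toy sketch (a hypothetical single-prompt "model", nothing like a real LLM) where the exact same sampling step produces the true completion and the false ones alike:

    import random

    # Hypothetical toy "model": one prompt, a fixed distribution over
    # completions. The generation code path is identical whether the
    # sampled completion happens to be true or false; factuality is
    # judged afterwards, outside the model.
    COMPLETIONS = {
        "The capital of Australia is": [
            ("Canberra", 0.6),   # happens to be true
            ("Sydney", 0.3),     # happens to be false
            ("Melbourne", 0.1),  # happens to be false
        ],
    }

    def generate(prompt):
        options, weights = zip(*COMPLETIONS[prompt])
        return random.choices(options, weights=weights, k=1)[0]

    # The same draw produces "correct" output and "hallucination" alike.
    for _ in range(5):
        print("The capital of Australia is", generate("The capital of Australia is"))

Nothing in the process distinguishes the two; the labels are applied after the fact by comparing the sample against reality.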

roblabla|2 years ago

call it what it is: random bullshit.