Intractable model error that's elemental to the approach won't get you any funding though.
No, it's good that the public understands that AIs are wrong so regularly that we need a special word dedicated to this one specific manner in which they're wrong.
Any recommendations? The public seems to actually understand what this means although it’s just more anthropomorphization of a random bullshit generator.
I forget where I originally heard this idea, but I always explain to people that LLMs are (affectionately) "bullshitters." Terms like "lying" or "hallucinating" imply that it's trying to tell the truth, when in fact it doesn't care at all whether what it says is true, save for the fact that true text is slightly more plausible than false text.
Instead of ‘hallucinations’, try ‘samplings from the model that happen not to be sufficiently reminiscent of reality’. Of course, it’s a little bit less catchy. But that’s the problem with catchiness — it sticks regardless of its truth.
pxmpxm|2 years ago
Anthropomorphizing statistical learning is how you build a hype machine to cash out people with zero handle on the subject. See the comment below about "AI judges" and "true justice". Just like early electricity, all people see is magic.
add-sub-mul-div|2 years ago
Generative AI output is becoming inextricably associated with this word, and that's not a bad thing to keep people aware of.
lupire|2 years ago
empath-nirvana|2 years ago
> No, it's good that the public understands that AIs are wrong so regularly

_Compared to what_, exactly? Compared to a Google search? Compared to asking a random person? Compared to Wikipedia? New York Times journalists?
Any of those things are wrong _very_ frequently. It's such an uninteresting thing to call out every time an AI is wrong, when it is right about things so frequently that people don't bother to notice how amazing it is that it gets anything correct about the world at all.
linkjuice4all|2 years ago
zer00eyz|2 years ago
Bugs, defects, and "not fit for production".
How about we stop with all the nonsense around calling it "temperature" like it's a sick baby and call it RAND, 'cause that's what it is.
The P.T. Barnum levels of bullshit around ML (see, we have a term that isn't using "artificial" or "intelligence") have gotten old. Sam Altman is the next Elizabeth Holmes.
</rant>
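For context on the term being ranted about: "temperature" in LLM sampling is not a bare RAND call but a divisor applied to the model's logits before the softmax, controlling how random the token choice is. A minimal sketch (hypothetical helper, not any particular library's API):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from logits scaled by temperature.

    temperature -> 0 approaches deterministic argmax; higher values
    flatten the distribution toward a uniform random pick.
    """
    if temperature <= 0:
        # Degenerate case: plain argmax, fully deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted draw from the temperature-scaled distribution.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# At temperature 0 the most likely token always wins; only as
# temperature grows does the choice approach a uniform RAND.
print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0))  # -> 0
```

So "it's just RAND" is only true in the high-temperature limit; at low temperature the output is nearly deterministic.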
lupire|2 years ago
pimanrules|2 years ago
xanderlewis|2 years ago
The fact that ‘correct’ outputs are treated as if they’re the product of an in-any-way-different process to the ‘hallucinated’ ones is the problem.
roblabla|2 years ago
unknown|2 years ago
[deleted]
neilv|2 years ago
"Wargames" (1983): https://www.youtube.com/watch?v=71k7-dGhNFQ&t=4m8s