kreeben | 1 year ago

Your linked paper suffers from the same anthropomorphisation as all papers that use the word "hallucination".

mordechai9000 | 1 year ago

It seems like a useful adaptation of the term to a new usage, but I can understand the objection that it promotes anthropomorphizing these types of models. What do you think we should call this kind of output instead of hallucination?

isidor3 | 1 year ago

An author at Ars Technica has been trying to push the term "confabulation" for this.

Karellen | 1 year ago

Maybe another way of looking at it is: the paper is attempting to explain, to people who have already anthropomorphised LLMs, what those models are actually doing.

Sometimes, to lead people out of a wrong belief or worldview, you have to meet them where they currently are first.

fouc | 1 year ago

> In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.

> We think that this is worth paying attention to. Descriptions of new technology, including metaphorical ones, guide policymakers’ and the public’s understanding of new technology; they also inform applications of the new technology. They tell us what the technology is for and what it can be expected to do. Currently, false statements by ChatGPT and other large language models are described as “hallucinations”, which give policymakers and the public the idea that these systems are misrepresenting the world, and describing what they “see”. We argue that this is an inapt metaphor which will misinform the public, policymakers, and other interested parties.

nerevarthelame | 1 year ago

The criticism that people shouldn't anthropomorphize AI models that are deliberately and specifically replicating human behavior is already tired. I think we need to accept that human traits will no longer be unique to humans (if they ever were, once you expand the analysis to non-human species), and that attributing these emergent traits to non-humans is justified. "Hallucination" may not be the optimal metaphor for LLM falsehoods, but some humans regularly spout bullshit in exactly the same way that LLMs do: the same sort of inaccurate responses generated from the same loose past associations.

soloist11 | 1 year ago

People like that are often schizophrenic.