You cannot approximate NP-complete functions. If you could approximate them within a practically useful error bound using at most polynomial effort, you would have proved P = NP. (Disclaimer: my computer science classes were a long time ago.)
This isn't correct. What you may be remembering is that some (not all) NP-complete problems have limits on how closely they can be approximated (unless P = NP). But approximation algorithms for NP-complete problems form a whole subfield of CS.
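To make the parent's point concrete, here is a sketch (not from the thread) of one of the simplest approximation algorithms for an NP-complete problem: the classic 2-approximation for minimum vertex cover via a maximal matching. It runs in polynomial time and provably returns a cover at most twice the optimal size.

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: for each uncovered edge, take both endpoints.

    The returned cover is at most 2x the minimum vertex cover, because any
    cover must include at least one endpoint of every matched edge.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge not yet covered: add both endpoints.
            cover.add(u)
            cover.add(v)
    return cover

# Example: a 5-cycle. The optimal cover has 3 vertices; this algorithm
# is guaranteed to return at most 6 (here it returns 4).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # valid cover
```

The guarantee is unconditional: unlike a heuristic, it bounds the error without ever finding the exact optimum.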
Perhaps I'm not using the vocabulary correctly here.
What I mean is: if you ask a human to solve a travelling salesman problem and they find it too hard to solve exactly, they will still be able to come up with a better-than-average solution. This is what I called approximation (but maybe that's incorrect?).
A hallucination would be to choose a random solution and claim that it's the optimum.
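The distinction drawn above can be sketched in code (a hypothetical illustration, not anything from the thread): a nearest-neighbor heuristic for TSP plays the role of the human's "approximation" — usually far better than chance, with no optimality claim — while a random tour presented as the answer plays the role of the "hallucination".

```python
import math
import random

def tour_length(points, order):
    # Total length of the closed tour visiting points in the given order.
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(points):
    # Greedy heuristic: always hop to the closest unvisited city.
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

random.seed(0)
points = [(random.random(), random.random()) for _ in range(50)]
greedy = tour_length(points, nearest_neighbor_tour(points))
rand = tour_length(points, random.sample(range(50), 50))
# The greedy tour is typically far shorter than a random one,
# yet nothing entitles us to claim it is the optimum.
```

Nearest-neighbor carries no worst-case guarantee (unlike the approximation algorithms with provable bounds mentioned upthread), but on typical inputs it is clearly "better than average".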
I may be misunderstanding the way LLM practitioners use the word “hallucination,” but I understood it to describe something different from the kind of “random” nonsense-word failures that happen when, for example, the temperature is set too high [0].
Rather, I thought hallucination, in your example, might be something closer to a grizzled old salesman-map-draftsman’s folk wisdom that sounds like a plausibly optimal mapping strategy to a boss oblivious to the mathematical irreducibility of the problem. Imagining a “fact” that sounds plausible and is rhetorically useful, but that’s never been true and nobody ever said was true.
It’ll still be, like the human in your example, better than average (if “average” means averaged across the universe of all possible answers), and maybe even useful enough to convince the people reading the output, but it will nonetheless be false.
[0] e.g. https://news.ycombinator.com/item?id=39450669