Last5Digits | 1 year ago
The text LLMs produce isn't just plausible in a "looks like human text" sense, as you'd know very well if you actually thought about it. When ChatGPT hallucinates a fake library that looks correct, that library has to seem sensible enough to fool people who know the domain. That can't be just a language trick anymore; it has to bear some similarity to the underlying structure of the problem space to look reasonable.
chx | 1 year ago
Last5Digits | 1 year ago
You're drawing meaningless distinctions; anyone who has ever used Cyc will tell you that it makes massive mistakes and spits out incorrect information all the time.
But that's true of humans too, and of every other system you can imagine. Facts aren't magical things living in your brain; they're information with a high probability of accurately modeling reality.
When someone tells you that x happened in y at time z, that only becomes a fact to you if the probability of the source being correct is high enough; that's it. 99% of your knowledge is a fact to you only because you extracted it from a source your heuristics told you was trustworthy enough. There is never absolute certainty; it's all just probability.
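The "facts are just high-probability information" point can be sketched as a toy Bayesian update. This is my illustration, not the commenter's; the `posterior` function and the symmetric-reliability assumption (a source is equally likely to err in either direction) are made up for the example:

```python
def posterior(prior: float, reliability: float) -> float:
    """Belief in a claim after a source of given reliability asserts it.

    reliability = P(source asserts claim | claim true)
                = P(source denies claim  | claim false)  # symmetric-error assumption
    """
    num = reliability * prior
    den = num + (1 - reliability) * (1 - prior)
    return num / den

belief = 0.5                 # no prior opinion about the claim
for r in (0.9, 0.9, 0.9):    # three independent, fairly trustworthy sources agree
    belief = posterior(belief, r)
print(round(belief, 4))      # → 0.9986
```

Nothing here ever reaches probability 1: even three 90%-reliable sources leave you at ~99.9%, which is exactly the sense in which a "fact" is just a belief that cleared some threshold.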