
Last5Digits | 1 year ago

At this point, I strongly urge you to think about what could possibly change your mind. Because if you can't think of anything, then this opinion isn't founded on reasoning.

The text LLMs produce is not just plausible in a "looks like human text" sense, as you'd know very well if you actually thought about it. When ChatGPT generates a fake library that looks correct, that library has to seem sensible enough to fool people who know the domain. That can't be just a language trick anymore; to look reasonable, the output has to mirror the underlying structure of the problem space.
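To make this concrete, here's roughly what such a hallucination looks like. Everything about pdfmagic below is invented (it fails at import, which is the point), but notice how closely its shape mirrors real PDF libraries:

    # Hypothetical hallucinated snippet: "pdfmagic" does not exist, but the
    # API shape is plausible because it mirrors real libraries like
    # pdfplumber and camelot.
    import pdfmagic  # fails: the library is an invention

    doc = pdfmagic.load("report.pdf")
    tables = doc.extract_tables(pages="1-3")
    for table in tables:
        print(table.to_csv())

To invent names and call shapes like these, the model has to have picked up how PDF-parsing APIs are actually structured, not just what English sentences look like.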

chx | 1 year ago

It rests on very solid reasoning: a probabilistic predictor doesn't deal in facts. You'd need Cyc for that.
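To make "probabilistic predictor" concrete, here's a toy sketch; the tokens and probabilities are invented for illustration, not taken from any real model:

    import random

    # Toy next-token distribution after the prompt "Paris is the capital of".
    # The numbers are made up; a real model distributes over ~100k tokens.
    next_token_probs = {"France": 0.90, "fashion": 0.06, "Texas": 0.04}

    def sample_next_token(probs):
        """Sample one token in proportion to its probability."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    # Usually "France", but "Texas" about 4% of the time: the sampling
    # mechanism is identical whether the continuation is true or false.
    print(sample_next_token(next_token_probs))

Nothing in that mechanism consults reality; it only consults the distribution.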

Last5Digits | 1 year ago

The fact that you refuse to engage with my points tells me otherwise.

You're drawing a meaningless distinction; anyone who has ever used Cyc will tell you that it makes massive mistakes and spits out incorrect information all the time.

But that's true even of humans, and of every other system you can imagine. Facts aren't magical things living in your brain; they're information with a high probability of accurately modeling reality.

When someone tells you x happened in y at time z, that only becomes a fact if the probability of the source being correct is high enough; that's it. 99% of your knowledge is a fact to you only because you extracted it from a source your heuristics told you is trustworthy enough. There is never absolute certainty; it's all just probability.
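That's just Bayes' rule. Here's a minimal sketch with placeholder numbers: your confidence in "x happened" after hearing it from a source is the prior combined with the source's reliability, and it never reaches exactly 1:

    def posterior_given_report(prior, p_report_if_true, p_report_if_false):
        """P(claim is true | source reported it), by Bayes' rule."""
        evidence = prior * p_report_if_true + (1 - prior) * p_report_if_false
        return prior * p_report_if_true / evidence

    # A source that reports truths 95% of the time and falsehoods 5% of the
    # time lifts a 50% prior to 95%, but never to certainty.
    print(posterior_given_report(prior=0.5,
                                 p_report_if_true=0.95,
                                 p_report_if_false=0.05))  # -> 0.95

Trustworthy sources push the posterior close to 1, which is all "knowing a fact" ever amounts to.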