To be clear, he is saying that the LLM is not capable of justified true belief, not commenting on people who believe LLM output. I don’t think your comment is relevant here.
I do think trusting an LLM is less firm ground for knowledge than other ways of learning.
Say I have a model that I know is 98% accurate. And it tells me a fact.
I am now justified in adjusting my priors and weighting the fact quite heavily, at 0.98. But that’s as far as I can get.
If I learned a fact from an online anonymously edited encyclopedia, I might also weight it at 0.98 to start with. But that’s a strictly better case, because I can dig more. I can look up the cited sources, look at the edit history, or message the author. I can use that as an entry point to end up with significantly more than 98% conviction.
That’s a pretty important difference with respect to knowledge. It isn’t just about accuracy percentage.
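The updating argument above can be sketched with Bayes’ rule. This is a minimal illustration, not anything from the thread: the 0.98 figure is treated as the source’s accuracy, and the second source’s 0.90 reliability is an assumed number purely for the example.

```python
def posterior(prior, accuracy):
    """Posterior probability that a claim is true after a source with
    the given accuracy asserts it, via Bayes' rule."""
    true_and_asserted = prior * accuracy
    false_and_asserted = (1 - prior) * (1 - accuracy)
    return true_and_asserted / (true_and_asserted + false_and_asserted)

# A 98%-accurate source asserting a 50/50 claim gets you to exactly 0.98,
# and (per the comment) that's as far as the LLM alone can take you.
p_llm = posterior(0.5, 0.98)  # 0.98

# With an encyclopedia you can keep digging: a second, independent check
# (a cited source you verify yourself, assumed 90% reliable) pushes the
# posterior above 0.98.
p_dug = posterior(p_llm, 0.90)  # ~0.998
```

The point of the sketch is that each independent, checkable piece of evidence compounds, which is exactly the "entry point" advantage the comment describes.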
That reading of the comment did occur to me, but I think neither dictionaries nor LLMs are capable of belief, and the comment was about the status of beliefs derived from them.
Okay, we are speaking past each other, and you are still misunderstanding the subtlety of the comment:
A dictionary or a reputable Wikipedia entry or whatever is ultimately full of human-edited text where, presuming good faith, the text is written according to that human's rational understanding, and humans are capable of justified true belief. This is not the case at all with an LLM; the text is entirely generated by an entity which is not capable of having justified true beliefs in the same way that humans and rats have justified true beliefs. That is why text from an LLM is more suspect than text from a dictionary.