nicklecompte | 1 year ago
A dictionary or a reputable Wikipedia entry or whatever is ultimately human-edited text: presuming good faith, it was written according to some human's rational understanding, and humans are capable of justified true belief. This is not the case at all with an LLM; the text is entirely generated by an entity which is not capable of having justified true beliefs in the way that humans and rats are. That is why text from an LLM is more suspect than text from a dictionary.
eynsham | 1 year ago
The point you make might be regarded as an argument about the first question. In each case, the ‘chain of custody’ (as the parent comment put it) of the text is compared, and some condition on it is proposed. The condition explicitly considered in the first question was reliability; it was suggested that reliability is not enough, because it isn’t justification (which we can understand pretheoretically, ignoring the post-Gettier literature). My point was that we can’t circumvent the post-Gettier literature, because at least one seemingly plausible view of justification is just reliability, and so that view needs to be rejected Gettier-style (see e.g. BonJour on clairvoyance).

The condition one might read into your point here is something like this: if, somewhere in the ‘chain of custody’, some text is generated by something that is incapable of belief, then the text at the end of the chain loses some epistemic virtue (for example, beliefs acquired on reading it may not amount to knowledge). Thus,
> text from an LLM is more suspect than text from a dictionary.
I am not sure that this is right. If I have a computer generate a proof of a proposition, I know the proposition thereby proved, even though ‘the text is entirely generated by an entity which is not capable of having justified true beliefs’ (or, arguably, beliefs at all). Or, even more prosaically, if I give a computer a list of capital cities and write a simple program that takes the name of a country and outputs e.g. ‘[t]he capital of France is Paris’, the computer generates the text and is incapable of belief, yet, in many circumstances, it is plausible to think that one thereby comes to know the fact that is output.
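To make the second example concrete, something like the following toy program is all I have in mind (a minimal sketch; the table and names are merely illustrative):

    # A fixed table of facts and a trivial lookup: the computer
    # generates the sentence, but nothing here has beliefs.
    CAPITALS = {"France": "Paris", "Japan": "Tokyo", "Kenya": "Nairobi"}

    def capital_sentence(country: str) -> str:
        return f"The capital of {country} is {CAPITALS[country]}."

    print(capital_sentence("France"))  # The capital of France is Paris.

If I trust whoever compiled the table, reading the printed sentence plausibly gives me knowledge, even though its proximate generator is incapable of belief.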
I don’t think that this is a reductio of the point about LLMs, because the output of an LLM is different from the output of, for example, an algorithm that searches for a formally verified proof, and so are the mechanisms by which the two are generated.
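For the proof case, one simple instance of what I mean (a sketch in Lean; the proposition is arbitrary):

    -- `decide` runs a decision procedure that constructs the proof
    -- term mechanically; the kernel then checks it. No believer is
    -- anywhere in the loop.
    example : 2 + 2 = 4 := by decide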