LLMs have no knowledge (really, "knowledge-like" weights and biases) outside their training data and whatever is in the prompt. That plausible-sounding justification is just that — a string of words that reads coherently when strung together. Whether you want to give it any epistemic weight is up to you.
eschaton | 2 months ago