(no title)
jaen | 8 hours ago
If you train an LLM on mostly false statements, it will generate both known and novel falsehoods. The same goes for truths.
An LLM has no intrinsic concept of true or false; everything is a function of the training set. It just generates statements similar to what it has seen, along with higher-dimensional analogies of them.
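A toy sketch of that point, with a bigram Markov chain standing in for an LLM (my assumption, not anything from the parent comment): trained on a corpus of mostly false statements, it emits both the falsehoods it saw and novel recombinations of them, because it only models what followed what, never whether a statement is true.

    # Toy stand-in for an LLM: a bigram Markov chain.
    # Hypothetical corpus of mostly false statements for illustration.
    import random

    corpus = [
        "the sun orbits the earth",
        "the moon orbits the earth",
        "the sun is made of ice",
        "the earth is flat",
    ]

    # Count which word followed which in the training set.
    transitions = {}
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            transitions.setdefault(a, []).append(b)

    def generate():
        # Sample words with the same conditional frequencies as the
        # corpus; the model has no notion of truth, only of sequence.
        word, out = "<s>", []
        while True:
            word = random.choice(transitions[word])
            if word == "</s>":
                return " ".join(out)
            out.append(word)

    random.seed(0)
    for _ in range(3):
        print(generate())
    # Can emit e.g. "the sun is flat": a novel falsehood that never
    # appears in the corpus, only an analogy of statements that do.

The same mechanism scaled up (and with truths in place of falsehoods) would reproduce and extrapolate true statements instead, which is the comment's point: the output distribution mirrors the training distribution either way.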