hoosieree | 1 year ago
I suppose depending on your point of view, LLMs either can't hallucinate, or that's all they can do.
ToValueFunfetti | 1 year ago
Empirically, this cannot be true. If it were, it would be statistically shocking how often models coincidentally say true things. The training does not perfectly align the model with truth, but 'orthogonal' is off by a minimum of 45 degrees.
viraptor | 1 year ago
> The training does not perfectly align the model with truth, but 'orthogonal'
Nitpicky, but the more dimensions you have, the more nearly orthogonal any two random vectors are likely to be. (https://softwaredoug.com/blog/2022/12/26/surpries-at-hi-dime...) That's why averaging embeddings works: near-orthogonal components barely interfere, so each one's contribution survives the average.
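A minimal sketch of that concentration effect (my own illustration, not from the linked post): sample pairs of random unit vectors and watch the typical cosine similarity shrink toward zero, roughly like 1/sqrt(d), as the dimension grows.

    import numpy as np

    rng = np.random.default_rng(0)

    # For each dimension, draw 1000 pairs of random unit vectors and
    # report the mean absolute cosine similarity between each pair.
    for dim in (2, 10, 100, 1000, 10000):
        a = rng.standard_normal((1000, dim))
        b = rng.standard_normal((1000, dim))
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        b /= np.linalg.norm(b, axis=1, keepdims=True)
        cos = np.abs(np.sum(a * b, axis=1))  # |cosine| per pair
        print(f"dim={dim:>5}  mean |cos| = {cos.mean():.3f}")

The printed mean |cos| drops steadily as dim increases, which is the "almost everything is orthogonal in high dimensions" point in miniature.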
CooCooCaCha | 1 year ago
Why do you care so much about this particular issue? And why can’t hallucination be something we can aim to improve?