vrotaru | 7 months ago
I've started to think of LLMs as a form of lossy compression of available knowledge which, when prompted, produces "facts".
devmor | 7 months ago
That is almost exactly what they are and what you should treat them as.
A lossy compressed corpus of publicly available information with a weight of randomness. The most fervent skeptics like to call LLMs "autocorrect on steroids" and they are not really wrong.
vbezhenar | 7 months ago
I think that's the right direction for modern AI to move. ChatGPT already uses Google searches often. So replace Google with a curated knowledge database, train the LLM to consult this database for every fact, and hallucinations will be gone.
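The idea in this comment is essentially retrieval-augmented generation: look facts up in a trusted store first, then have the model answer only from what was retrieved. A minimal sketch, with a toy in-memory knowledge base and naive keyword-overlap ranking standing in for a real curated database and retriever (all names here are hypothetical):

```python
# Toy stand-in for the "curated knowledge database" from the comment above.
KNOWLEDGE_BASE = [
    "The Eiffel Tower is 330 metres tall.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The Python language was first released in 1991.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank stored facts by naive keyword overlap with the query.

    A real system would use embeddings and a vector index instead.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda fact: len(q_words & set(fact.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the model on retrieved facts rather than its weights alone."""
    facts = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the facts below; say 'unknown' otherwise.\n"
        f"Facts:\n{facts}\n"
        f"Question: {query}"
    )

print(build_prompt("How tall is the Eiffel Tower?"))
```

The grounding happens in the prompt: the model is told to answer from the retrieved facts or admit ignorance, which is what pushes hallucination down (though it does not eliminate it).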