rewilder12 | 3 months ago
Big tech created a problem for themselves by letting people believe that what their LLM-powered products generate is fact.
We are now arriving at the obvious conclusion of where that leads.
kentm | 3 months ago
I always thought that was a correct and useful observation.
hnuser123456 | 3 months ago
On the other hand, training a small model to hallucinate less would be a significant development. Perhaps it could be done with post-training fine-tuning: first get a sense of how much factual knowledge the model has actually absorbed, then add a chunk of training samples whose questions go beyond that limit, with the model responding "Sorry, I'm a small language model and that question is out of my depth." I know we all hate refusals, but surely there's room to improve them.
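Concretely, a minimal sketch of that post-training step might look like the following. Everything here is a placeholder, not a real API: `ask_model` stands in for querying the small model, and the probe set is a toy. The idea is to probe the model with factual questions, keep the ones it misses, and turn those into refusal samples for supervised fine-tuning.

    import json

    REFUSAL = "Sorry, I'm a small language model and that question is out of my depth."

    def ask_model(prompt: str) -> str:
        # Placeholder for querying the small model being fine-tuned.
        return ""  # a toy model that knows nothing

    # Toy probe set; in practice this would be a large bank of factual questions.
    probe_set = [
        {"prompt": "What is the capital of France?", "gold": "Paris"},
        {"prompt": "Who won the 1954 Tour de France?", "gold": "Louison Bobet"},
    ]

    samples = []
    for q in probe_set:
        predicted = ask_model(q["prompt"])
        # If the model misses the answer, the question is beyond its factual
        # depth, so teach it to refuse instead of confabulating.
        if q["gold"].lower() not in predicted.lower():
            samples.append({"prompt": q["prompt"], "completion": REFUSAL})

    # Emit samples in a generic JSONL format that most SFT tooling accepts.
    with open("refusal_sft.jsonl", "w") as f:
        for s in samples:
            f.write(json.dumps(s) + "\n")

The substring check for correctness is of course crude; you'd want a proper grader, and you'd balance the refusal samples against answerable questions so the model doesn't learn to refuse everything.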
th0ma5 | 3 months ago