Are you sure?
I've been confidently wrong about stuff before. Embarrassing, but it happens.
And I've worked with plenty of people who are sometimes wrong about stuff too.
With LLMs we call that "hallucinating"; with people we just call it a "lapse in memory", an "error in judgment", "being distracted", or plain old "a mistake".
fainpul|3 months ago
LLMs are always super confident and tell you how it is. Period. You would soon stop asking a coworker who repeatedly behaved like that.
illuminator83|3 months ago
But I think the point of the article is that you should have measures in place which make hallucinations not matter, because they will be noticed in CI and tests.
illuminator83|3 months ago
But I remember, long ago, a few people confidently telling me how to do this or that in e.g. git, only to find out during testing that it didn't quite work like that. Or telling me how some subsystem could be tested, when it didn't work like that at all. Because they operated from memory instead of checking, or confused one tool or system with another.
LLMs can and should verify their assumptions too; that's what the blog article is about. That should keep most hallucinations, and most human mistakes, from doing any real harm.
If you let an LLM do that, hallucinations won't be much of a problem either. I usually link the LLM to an online source for the API I want to use, or tell it to just look it up, so it is less likely to make such mistakes. It helps.
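The "it will be noticed in CI and tests" idea above can be sketched as an ordinary unit test. `make_slug` and its test are hypothetical, just for illustration: the point is that if an assistant had hallucinated a nonexistent string method here, the test run would fail immediately instead of the bug shipping.

```python
def make_slug(title: str) -> str:
    # Suppose an assistant had suggested title.title_case() here.
    # That method does not exist on str, so the test below would
    # have raised AttributeError and surfaced the hallucination in CI.
    return title.lower().strip().replace(" ", "-")

def test_make_slug():
    assert make_slug("Hello World ") == "hello-world"
```

Nothing LLM-specific is needed: the same test that guards against a colleague's misremembered API guards against a model's hallucinated one.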