uSoldering|1 year ago
https://web.archive.org/web/20200807133049/https://www.wisco...
The (edited) cheese ad: https://www.youtube.com/watch?v=I18TD4GON8g
What probably should be the target link: https://www.theverge.com/news/608188/google-fake-gemini-ai-o...
BugsJustFindMe|1 year ago
The article literally says it's not a hallucination and that the detail came from real websites.
"Google executive Jerry Dischler said this was not a “hallucination” – where AI systems invent untrue information – but rather a reflection of the fact the untrue information is contained in the websites that Gemini scrapes..."
gusfoo|1 year ago
The "hallucination" term generally refers to any made-up facts. Harsh as it may be to put this weight of responsibility on LLMs, users generally operate them in the expectation that what they say is true and has been (in some magic hand-wavy way) cross-checked or confirmed as factual. Instead, they simply print out whatever is most likely to follow the user's input, based on the training data.
Unfortunately, a large share of that vast training corpus is social media posts, which can't be relied upon to be true. But if something gets repeated enough, it's treated as true, in the sense that "what does 'salary' mean?" is generally followed by a billion social media posts saying "it refers to the time when Roman soldiers were paid in salt, because salt was a currency at the time".
NicuCalcea|1 year ago
The article says that's what a Google exec claims, not that it's actually the case. They haven't pointed to any of those websites, and we don't have to take them at their word.
Someone further down pointed to a source on cheese.com, which says gouda makes up 50% to 60% of global consumption of Dutch cheese, not of all cheese. If that source is accurate, the AI did hallucinate an incorrect figure.