IlliOnato | 1 year ago
For example, try asking (preferably in Russian) how many letters "а" there are in the Russian word "банан". The correct answer is 2, yet seemingly all models answer "3". Playing with it suggests that LLMs confuse the Russian "банан" with the English "banana" (same meaning, three "a"s). Trying to get LLMs to produce the correct answer results in some hilarity; a sketch of the underlying mix-up follows this comment.
I wonder if each "failure" of this kind deserves an academic article, though. Well, perhaps it does, when different models exhibit the same behaviour...
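(Editor's note: a minimal Python sketch, not from the thread, making the confusion concrete. The Cyrillic "а" (U+0430) and the Latin "a" (U+0061) look identical but are distinct codepoints, and the two words genuinely have different letter counts.)

    word_ru = "банан"   # Russian for "banana"
    cyrillic_a = "\u0430"  # Cyrillic "а"
    latin_a = "\u0061"     # Latin "a"

    # Counting the right codepoint gives the correct answer, 2
    print(word_ru.count(cyrillic_a))  # 2

    # Counting the lookalike Latin letter finds nothing
    print(word_ru.count(latin_a))     # 0

    # The English cognate has 3 "a"s -- the answer models tend to give
    print("banana".count(latin_a))    # 3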
alfiopuglisi | 1 year ago
LLMs are a tool, and like any other tool, they have strengths and weaknesses. Know your tools.
IlliOnato | 1 year ago
I mean, you can ask an LLM to count letters in thousands of words, and pretty much always it will come up with the correct answer! So far I don't know of any word other than "банан" that breaks this function.