guys why does armenian completely break Claude
99 points | ag8 | 1 month ago | twitter.com
https://claude.ai/share/e368b733-71a4-4211-99f5-6b6cc717b575
wnmurphy|1 month ago
Then they changed the architecture so voice mode bypasses custom instructions entirely, which was really unfortunate. I had to unsubscribe, because walking and talking was the killer feature and now it's like you're speaking to a Gen Z influencer or something.
trjordan|1 month ago
but also, getting shut down for safety reasons seems entirely foreseeable when the initial request is "how do I make a bomb?"
elromulous|1 month ago
I believe fans have provided a retroactive explanation that all our computer tech was based on reverse-engineering the crashed alien ship, and thus the architecture, ABIs, etc. were compatible.
It's a movie, so whatever, but considering how easily a single project / vendor / chip / anything breaks compatibility, it's a laughable explanation.
Edit: phrasing
layer8|1 month ago
Given that the language of the thought process can be different from the language of conversation, it’s interesting to consider, along the lines of Sapir–Whorf, whether having LLMs think in a different language than English could yield considerably different results, irrespective of conversation language.
(Of course, there is the problem that the training material is predominantly English.)
tobyjsullivan|1 month ago
For example, if I ask for a pasta recipe in Italian, will I get a more authentic recipe than in English?
I’m curious if anyone has done much experimenting with this concept.
Edit: I looked up Sapir-Whorf after writing. That’s not exactly where my theory started. I’m thinking more about vector embedding. I.e., the same content in different languages will end up with slightly different positions in vector space. How significantly might that influence the generated response?
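The intuition above, that the same content in different languages lands at nearby but not identical positions in vector space, can be sketched with toy numbers. The vectors below are hypothetical stand-ins; in practice a multilingual sentence encoder would produce the embeddings.

```python
import numpy as np

# Hypothetical embeddings for the same sentence in two languages.
# A real multilingual encoder would map both near each other, but not
# to exactly the same point.
emb_en = np.array([0.8, 0.1, 0.3])   # "a recipe for pasta" (English)
emb_it = np.array([0.7, 0.2, 0.35])  # "una ricetta per la pasta" (Italian)

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction in vector space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# High similarity, but not 1.0 -- the small offset is what might nudge
# the model toward slightly different generations per language.
print(cosine(emb_en, emb_it))
```

Whether that small offset translates into a meaningfully "more authentic" recipe is exactly the open question; the sketch only illustrates the geometric claim.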
specproc|1 month ago
There should be a larger Armenian corpus out there. Do any other languages cause this issue? Translation is a real killer app for LLMs; I'm surprised to see this problem in 2026.