This is an interesting point: it's true that knowingly metaphorical anthropomorphization is hard to distinguish from the genuine kind, and that's food for thought, but it doesn't apply to the actual situation here. This is a specific misconception that people make all the time. The OP explicitly thought the model would know why it did the wrong thing, or at least followed a strategy adjacent to that misunderstanding. He was surprised that adding extra slop to the prompt was no more effective than telling it what to do himself. It's not a figure of speech.
zarzavat|1 month ago
> No one gets in trouble for saying that 2 + 2 is 5, or that people in Pittsburgh are ten feet tall. Such obviously false statements might be treated as jokes, or at worst as evidence of insanity, but they are not likely to make anyone mad. The statements that make people mad are the ones they worry might be believed. I suspect the statements that make people maddest are those they worry might be true.
People are upset when AIs are anthropomorphized because they feel threatened by the idea that they might actually be intelligent.
Hence the woefully insufficient descriptions of AIs such as "next token predictors," which are about as fitting as describing Terry Tao as an advanced gastrointestinal processor.
jdub|1 month ago
I'm threatened by other people wrongly believing that LLMs possess elements of intelligence that they simply do not possess.
Anthropomorphization of LLMs is easy, seductive, and wrong. And therefore dangerous.
nonethewiser|1 month ago
I am speaking in general terms, not just about this conversation here. The only specific figure of speech I see in the original comment is "self reflection," which doesn't seem to be in question here.