(no title)
oo0shiny|6 months ago
What a great way of framing it. I've been trying to explain this to people, but this is a succinct version of what I was stumbling to convey.
jstrieb|6 months ago
https://jstrieb.github.io/posts/llm-thespians/
red75prime|6 months ago
ngc248|6 months ago
lagrange77|6 months ago
bo1024|6 months ago
[1] https://en.wikipedia.org/wiki/On_Bullshit
aitchnyu|6 months ago
lagrange77|6 months ago
But I think this 'being wrong' is kind of confusing when talking about LLMs (in contrast to systems/scientific modelling). In what they model (language), the current LLMs are really good and accurate, except for, say, the occasional Chinese character in the middle of a sentence.
But what we mean by LLMs 'being wrong' is, most of the time, being factually wrong in answering a question that is expressed as language. That's a layer on top of what the model is designed to model.
EDITS:
So saying 'the model is wrong' when it's factually wrong above the language level isn't fair.
I guess this is essentially the same thought as 'all they do is hallucinate'.
pjmorris|6 months ago
mohsen1|6 months ago
unknown|6 months ago
[deleted]
tugberkk|6 months ago
ljm|6 months ago
Because it seems the point being made multiple times is that a perceptual error isn't a key component of hallucinating; the whole thing is instead just a convincing illusion that could theoretically apply to all perception, not just the psychoactively augmented kind.
OtomotO|6 months ago
E.g. Programming in JS or Python: good enough
Programming in Rust: I can scrap over 50% of the code because it will
a) not compile at all (I can see this while the "AI" is typing)
b) not meet the requirements at all