
glenndebacker | 1 year ago

I've noticed that when using a language model to rephrase text, it sometimes seems to miss important details because it clearly has no real understanding of the text.

It's not a problem when you're aware of it, since some follow-up input can mitigate it, but I often see people take the first output of these systems at face value. People should be a bit more critical in that regard.

Bumblonono | 1 year ago

I really don't wanna be nitpicky, but what do you mean by 'no real understanding of the text'?

How do you benchmark whether something or someone understands text?

I'm asking because the magic of LLMs is the meta level, which basically creates a mathematical representation of meaning, and most of the time, when I write with an LLM, it feels very much like it understands me.

Missing details is shitty and annoying, but I have talked to humans, and plenty of them do the same thing, only worse.

xanderlewis|1 year ago

If you ask it basic mathematical questions, it quickly becomes clear that the ‘understanding’ it seems to possess is a mirage. The illusion is shattered after a few prompts. To use your comparison with humans: if any human said such naive and utterly wrong things, we’d assume they were either heavily intoxicated or had no understanding of what they were talking about and were simply bluffing.

I guess at best you can say these models have an ‘understanding’ of language, but their ability to waffle endlessly and eruditely about any well-known topic you throw at them is just further evidence of that, not evidence that they understand the content.