glenndebacker | 1 year ago
It's not a problem when you're aware of it, and with some follow-up input you can mitigate it, but I often see people take the first output of these systems at face value. People should be a bit more critical in that regard.
Bumblonono | 1 year ago
How do you benchmark whether something or someone understands text?
I'm asking because the magic of LLMs is the meta level: they essentially build a mathematical representation of meaning, and most of the time, when I write with an LLM, it feels very understanding to me.
Missing details is shitty and annoying, but I've talked to plenty of humans who do the same thing, only worse.
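For what it's worth, a minimal sketch of that "mathematical representation of meaning" idea, assuming the sentence-transformers package (the model name is just one common choice): sentences are mapped to vectors, and paraphrases land closer together in that space than unrelated sentences.

    # Sketch: sentences become vectors; semantic similarity shows up
    # as geometric closeness (cosine similarity). Assumes the
    # sentence-transformers package; all-MiniLM-L6-v2 is an
    # illustrative model choice, not the only option.
    from sentence_transformers import SentenceTransformer
    import numpy as np

    model = SentenceTransformer("all-MiniLM-L6-v2")

    sentences = [
        "The cat sat on the mat.",
        "A feline was resting on the rug.",     # paraphrase of the first
        "Quarterly revenue rose by 4 percent.", # unrelated
    ]
    vecs = model.encode(sentences)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(vecs[0], vecs[1]))  # high: paraphrases sit close together
    print(cosine(vecs[0], vecs[2]))  # low: unrelated meanings sit far apart

Whether that geometric closeness counts as "understanding" is, of course, exactly what's being debated here.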
xanderlewis | 1 year ago
I guess at best you can say these models have an 'understanding' of language; their ability to waffle endlessly and eruditely about any well-known topic you throw at them is just further evidence of that, not evidence that they understand the content.