Nition | 7 days ago

It highlights a general problem with LLMs: they always jump to answering, whereas humans will often ask clarifying questions first.
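
You can partly work around this with an explicit system prompt. A minimal sketch, assuming the OpenAI Python client; the model name and prompt wording are illustrative, not a tested fix:

    # Sketch: nudge the model to clarify before answering.
    # Assumes the OpenAI Python client; model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative, swap in whatever you use
        messages=[
            {"role": "system", "content": (
                "If the request is ambiguous or missing key details, "
                "do not answer yet. Ask one short clarifying question first."
            )},
            {"role": "user", "content": "Make my script faster."},
        ],
    )

    print(response.choices[0].message.content)

In practice this only shifts the default; the model still decides per request whether it considers the question ambiguous.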


micw | 7 days ago

Maybe that's a bias from the training data. I would assume that most documents skip the "clarifying the question/scope" part of reasoning. Imagine a scientific text, or even a book: most start with a clear context and scope, either with a thesis, a well-defined question, or (in the case of a book) a story. Texts that start with a question that first needs to be refined are probably rare.

user_7832 | 7 days ago

I wonder if anyone has done research in this area. I've seen this myself (too often): LLMs make assumptions and run off with the wrong thing.

"This is how you do <absolutely unrelated thing>" or "This is why <thing that actually exists already> is impossible!". Ffs man, just ask for info! A human wouldn't need to - they'd get the context - but LLMs apparently don't?

magackame | 7 days ago

Don't people do this all the time too?