micw|7 days ago

Maybe that's a bias from the training data. I'd assume most documents skip the "clarifying the question/scope" part of reasoning. Think of a scientific text or even a book: most start with a clear context/scope, either with a thesis, a well-defined question, or (in the case of a book) a story. Texts that open with a question that first needs to be refined are probably rare.

user_7832|7 days ago

I wonder if anyone has done research in this field. I've seen this myself (far too often), where LLMs make assumptions and run off with the wrong thing.

"This is how you do <absolutely unrelated thing>" or "This is why <thing that already exists> is impossible!" Ffs man, just ask for info! A human wouldn't need to - they'd get the context - but LLMs apparently don't?
"This is how you do <absolutely unrelated thing>" or "This is why <thing that actually exists already> is impossible!". Ffs man, just ask for info! A human wouldn't need to - they'd get the context - but LLMs apparently don't?
magackame|7 days ago