user_7832 | 6 days ago

I wonder if anyone has any research in this area. I've seen this myself (too often): LLMs make assumptions and run off with the wrong thing.

"This is how you do <absolutely unrelated thing>" or "This is why <thing that actually exists already> is impossible!". Ffs man, just ask for info! A human wouldn't need to - they'd get the context - but LLMs apparently don't?

magackame | 5 days ago

Don't people do this all the time too?