munro|3 months ago
Amazing. Some people who use LLMs for soft outcomes are so enamored with them that they disagree with me when I say to be careful, they're not perfect -- this is such a great non-technical way to explain the reality I'm seeing when using them on hard-outcome coding/logic tasks. "Hey, this test is failing", LLM deletes test, "FIXED!"
derbOac|3 months ago
What about when we don't know what it's supposed to look like?
Lately I've been wrestling with the fact that, unlike with, say, a generalized linear model fit to data with some inferential theory, we don't have a theory or model for the uncertainty of LLM outputs. We recognize when it's wrong about things we already know, but we have no way to estimate when it's wrong other than checking it against reality -- which is probably the exception in how it's used rather than the rule.
ehnto|3 months ago
It's why non-coders think it's doing an amazing job at software.
But worryingly, it's also why using it for research, where you necessarily don't know what you don't know, is going to trip up even smarter people.
munro|3 months ago
My intuition is that at the start, when the instruction was "choose one of these 10 or unknown", that "unknown" left a big gray area; as I added more classes, the model could say "I know it's not X, because it's more similar to Y."
I feel like in this case, though, the broken clocks are broken because they don't serve the purpose of visually transmitting information -- they do look like clocks, tho. I'm sure if you fed the output back into the LLM and asked what time it is, it would say IDK, or more likely make something up and be wrong (at least for the egregious ones where the hands are flying everywhere).
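(For anyone curious what the "choose one of these 10 or unknown" setup above looks like in practice, here's a minimal sketch of building such a prompt. The label names, wording, and function name are all hypothetical -- the point is just that the model gets a fixed label set plus an explicit "unknown" escape hatch, so it can rule options out rather than guess.)

```python
# Hypothetical labels for illustration; the original comment doesn't list them.
LABELS = ["analog clock", "digital clock", "sundial", "hourglass",
          "stopwatch", "calendar", "timer", "watch face",
          "metronome", "countdown"]

def build_classification_prompt(description: str, labels=LABELS) -> str:
    """Build a prompt that forces a choice from a fixed label set,
    with 'unknown' as an explicit fallback option."""
    options = "\n".join(f"- {label}" for label in labels)
    return (
        "Classify the item described below as exactly one of:\n"
        f"{options}\n- unknown\n\n"
        f"Description: {description}\n"
        "Answer with one label only."
    )

prompt = build_classification_prompt("a round dial with two hands")
print(prompt)
```

Adding more classes shrinks the gray area that "unknown" has to cover, which matches the intuition above.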
palmotea|3 months ago
I disagree, those tasks are perfect for LLMs, since a bug you can't verify isn't a problem when vibecoding.