Last5Digits | 1 year ago

No, the answers aren't just "plausible"; they are correct the vast majority of the time. You can try this for yourself, look at any benchmark or leaderboard, or just listen to the millions of people using them every day. I fact-check constantly when I use any LLM, and I can attest that I don't merely believe the answers I'm getting are correct: they actually are.

But apparently they don't actually get better, even though every metric tells us they do, because they can't? How about making an actual argument? Why is correctness "not a property of LLMs"? Do you have a point here that I'm missing? Whether or not Kahneman thinks there are two different systems of thinking in the human mind has absolutely no relevance here. Factuality isn't some magical circuit in the brain.

> No such thing can exist.

In the same way that no piece of clothing, piece of tech, piece of furniture, book, toothpick or paperclip can be environmentally friendly; yes. In any common usage, "environmentally friendly" simply means reduced impact, which is absolutely possible with LLMs, as demonstrated by bigger models being distilled into smaller, more efficient ones.
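To make the distillation point concrete, here is a rough sketch of the standard soft-label distillation loss. The `teacher`/`student` names and the temperature value are illustrative assumptions, not any particular model's actual recipe:

    # Rough sketch of soft-label knowledge distillation in PyTorch.
    # `teacher`, `student` and the temperature are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # KL divergence between temperature-softened teacher and student distributions.
        t_probs = F.softmax(teacher_logits / temperature, dim=-1)
        s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
        return F.kl_div(s_log_probs, t_probs, reduction="batchmean") * temperature ** 2

    # One training step, assuming `teacher`, `student`, `optimizer`, `batch` exist:
    #   with torch.no_grad():
    #       teacher_logits = teacher(batch).logits
    #   student_logits = student(batch).logits
    #   distillation_loss(student_logits, teacher_logits).backward()
    #   optimizer.step()

The smaller student trained this way is far cheaper to run at inference time, which is exactly the kind of reduced impact being argued for here.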

Discussing the environmental impact of LLMs has always been silly, given that we regularly blow more CO2 into the atmosphere to produce and render the newest Avengers movie or to spend one week in some marginally more comfortable climate.

chx | 1 year ago

No, they are not correct. The answer it gives might accidentally be correct, but it cannot be trusted; you still need to do research to verify everything it says, and so the only usable standpoint is to treat it as a bullshit generator, which it is very good at.

Last5Digits | 1 year ago

What's your definition of "correct", then? If a system is "accidentally correct" the majority of the time, when does it stop being "accidental"? You cannot trust any system in the way you want to define trust. No human, no computer, no thing in the universe is always correct. There is always a threshold.

I do research with LLMs all the time and I trust them, to a degree. Just like I trust any source and any human, to a degree. Just like I trust the output of any computer, to a degree. I don't need to verify everything they say, at all, in any way.

Genuine question, how do you think an LLM can generate "bullshit", exactly? How can it be that the system, when it doesn't know something, can output something that seems plausible? Can you explain to me how any system could do such a thing without a conception of reality and truth? Why wouldn't it just make something up that's completely removed from reality, and very obviously so, if it didn't have that?