Last5Digits | 1 year ago
But apparently they don't actually get better, even though every metric tells us they do, because they can't? How about making an actual argument? Why is correctness "not a property of LLMs"? Do you have a point here that I'm missing? Whether or not Kahneman thinks there are two different systems of thinking in the human mind has no relevance here. Factuality isn't some magical circuit in the brain.
> No such thing can exist.
In the same way that there can exist no piece of clothing, piece of tech, piece of furniture, book, toothpick, or paperclip that is environmentally friendly; yes. In common usage, "environmentally friendly" simply means reduced impact, which is absolutely possible with LLMs, as demonstrated by bigger models being distilled into smaller, more efficient ones.
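For context on what "distilled" means here: a small "student" model is trained to match the softened output distribution of a larger "teacher", so most of the capability survives at a fraction of the inference cost. A minimal sketch of the standard soft-target loss (temperature-scaled KL divergence); the function names and temperature value are illustrative, not taken from any particular framework:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Minimizing this pushes the student's output distribution toward the
    teacher's, which is the core idea of knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the big model
    q = softmax(student_logits, temperature)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that already matches the teacher incurs ~zero loss...
logits = np.array([2.0, 1.0, 0.5])
assert distillation_loss(logits, logits) < 1e-9

# ...while a mismatched student incurs a positive loss to train against.
assert distillation_loss(np.array([0.0, 0.0, 3.0]), logits) > 0.0
```

In practice this term is combined with an ordinary cross-entropy loss on the training labels, but the soft-target term is what transfers the larger model's behavior.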
Discussing the environmental impact of LLMs has always been silly, given that we regularly blow more CO2 into the atmosphere to produce and render the newest Avengers movie or to spend one week in some marginally more comfortable climate.
chx | 1 year ago
Last5Digits | 1 year ago
I do research with LLMs all the time and I trust them, to a degree. Just like I trust any source and any human, to a degree. Just like I trust the output of any computer, to a degree. I don't need to verify everything they say, at all, in any way.
Genuine question, how do you think an LLM can generate "bullshit", exactly? How can it be that the system, when it doesn't know something, can output something that seems plausible? Can you explain to me how any system could do such a thing without a conception of reality and truth? Why wouldn't it just make something up that's completely removed from reality, and very obviously so, if it didn't have that?