shitloadofbooks | 2 years ago
LLMs (current-generation and UI/UX ones, at least) will tell you all sorts of incorrect "facts", with great gusto and implied authority, simply because "these words go next to each other lots".
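As a toy illustration of that point (my sketch, not anything from the thread): a bigram generator picks each next word purely from co-occurrence counts in its training text, so it emits fluent, confident-sounding sentences with no notion of whether they are true. Real LLMs are vastly more sophisticated, but this is the failure mode being described, in miniature:

    import random
    from collections import defaultdict

    # Tiny "training corpus". The model only ever learns which word tends
    # to follow which; it has no representation of truth.
    corpus = ("the moon is made of rock the moon is made of cheese "
              "the earth is made of rock").split()

    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)  # raw "these words go next to each other" counts

    word, out = "the", ["the"]
    for _ in range(8):
        word = random.choice(following[word])  # sample by co-occurrence frequency alone
        out.append(word)

    # May print e.g. "the moon is made of cheese the earth is made",
    # delivered with exactly the same confidence as a true sentence.
    print(" ".join(out))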
lxgr | 2 years ago
Maybe they’re just orders of magnitude more useful at the beginning of a career, when it’s more important to digest and distill readily-available information than to come up with original solutions to edge cases or solve gnarly puzzles?
Maybe I also simply don’t write enough code anymore :)
lxgr | 2 years ago
But sometimes we're more than that: Some types of deep understanding aren't verbal or language-based, and I suspect that these are the ones that LLMs will have the hardest time getting good at. That's not to say that no AI will get there at all, but I think it'll need something fundamentally different from LLMs.
For what it's worth, I've personally changed my mind here: I used to think that the level of language proficiency that LLMs demonstrate easily would only be possible using an AGI. Apparently that's not the case.
barrysteve | 2 years ago
We can generate thoughts that are spatially coherent, time-aware, and validated for correctness, along with a whole bunch of other qualities that LLMs cannot match.
Why would LLMs be the model for human thought, when they don't come close to the thoughts humans produce every minute of every day?
"Aren't we all just stochastic parrots?" is the kind of question that requires answering an awful lot about the universe before you can get to an answer.