item 37562625

shitloadofbooks | 2 years ago

Asking a statistics engine for knowledge is so unfathomable to me that it makes me physically uncomfortable. Your hyperbolic and relentless praise for a stochastic parrot or a "sentence written like a choose your own adventure by an RNG" seems unbelievably misplaced.

LLMs (current-generation and UI/UX ones, at least) will tell you all sorts of incorrect "facts" just because "these words go next to each other lots", with a great amount of gusto and implied authority.


Supply5411|2 years ago

My mind is blown that someone gets so little value out of an LLM. I get over software engineering stumbling blocks much faster by interrogating an LLM's knowledge about the subject. How do you explain that added value? Are you skeptical that I am actually moving and producing things faster?

lxgr|2 years ago

My mind is also blown by how much people seemingly get out of them.

Maybe they’re just orders of magnitude more useful at the beginning of a career, when it’s more important to digest and distill readily-available information than to come up with original solutions to edge cases or solve gnarly puzzles?

Maybe I also simply don’t write enough code anymore :)

__loam|2 years ago

This happened to me looking up an obscure C library. It just confidently made up a function that didn't actually exist in the library. It got me unstuck, but you can really fuck yourself if you trust it blindly.
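The general fix for this failure mode is cheap: before trusting an LLM-suggested API, check that the symbol actually exists. A minimal sketch in Python (for a C library you'd reach for `nm` or `dlsym` instead); `cos_degrees` here is an invented, plausible-sounding name of the kind an LLM might hallucinate:

```python
import importlib


def symbol_exists(module_name: str, attr: str) -> bool:
    """Return True only if the module imports and actually
    exposes the named attribute -- i.e. the LLM didn't make it up."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)


# math.cos is real; math.cos_degrees sounds real but isn't.
print(symbol_exists("math", "cos"))          # True
print(symbol_exists("math", "cos_degrees"))  # False
```

Running this kind of check against the docs or the installed package takes seconds and catches the confident hallucinations before they reach your codebase.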

pests|2 years ago

I agree with you but at what point does it change? Aren’t we all just stochastic parrots? How do we ourselves choose the next word in a sentence?

lxgr|2 years ago

In my view, one big lesson from LLMs is that yes, more often than not we are just stochastic parrots. And more often than not that's enough!

But sometimes we're more than that: Some types of deep understanding aren't verbal or language-based, and I suspect that these are the ones that LLMs will have the hardest time getting good at. That's not to say that no AI will get there at all, but I think it'll need something fundamentally different from LLMs.

For what it's worth, I've personally changed my mind here: I used to think that the level of language proficiency that LLMs demonstrate easily would only be possible using an AGI. Apparently that's not the case.

barrysteve|2 years ago

If you wish to make an apple pie, first you must make the universe from scratch. (Carl Sagan)

We can generate thoughts that are spatially coherent, time aware, validated for correctness and a whole bunch of other qualities that LLMs cannot do.

Why would LLMs be the model for human thought, when they do not come close to the thoughts humans have every minute of every day?

"Aren't we all just stochastic parrots?" is the kind of question that requires answering an awful lot about the universe before you get to an answer.

skydhash|2 years ago

We use languages to express ideas. Sentences are always subordinate to the ideas. It's very obvious when you try to communicate in a language you're not fluent in: you have the thought, but you can't find the words. The same thing happens when writing code, taking ideas from the business domain and translating them into code.

__loam|2 years ago

God dammit please stop comparing these things to brains. Stop it. It's not even close.