chpatrick|5 months ago
People constantly assert that LLMs don't think in some magic way that humans do think, when we don't even have any idea how that works.

mindcrime|5 months ago
> People constantly assert that LLMs don't think in some magic way that humans do think, when we don't even have any idea how that works.
It doesn't matter anyway. The marquee sign reads "Artificial Intelligence", not "Artificial Human Being". As long as AI displays intelligent behavior, it's "intelligent" in the relevant context. There's no basis for demanding that the mechanism be the same as what humans do.
And of course it should go without saying that Artificial Intelligence exists on a continuum (just like human intelligence, for that matter) and that we're not "there yet" in terms of reaching the extreme high end of that continuum.

jbritton|5 months ago
I recently saw an article about LLMs and the Towers of Hanoi. An LLM can write code to solve it. It can also output the steps to solve it when the disk count is low, like 3. It can't give the steps when the disk count is higher. This indicates LLMs' inability to reason and understand. Also see Gotham Chess and the Chatbot Championship: the chatbots start off making good moves, but then quickly transition to making illegal moves and generally playing unbelievably poorly. They don't understand the rules or strategy or anything.
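(For reference, the classic recursive solution -- a generic sketch, not code from the article -- which also shows why the full step list blows up: n disks take 2^n - 1 moves, so 3 disks is 7 moves but 10 disks is already 1023.)

    # Recursive Towers of Hanoi: move n disks from `src` to `dst` using `aux`.
    # The returned move list has 2**n - 1 entries, which is why enumerating
    # every step stays short for 3 disks but grows exponentially after that.
    def hanoi(n, src="A", dst="C", aux="B"):
        if n == 0:
            return []
        moves = hanoi(n - 1, src, aux, dst)   # park the top n-1 disks on aux
        moves.append((src, dst))              # move the largest disk
        moves += hanoi(n - 1, aux, dst, src)  # bring the n-1 disks back on top
        return moves

    print(hanoi(3))        # 7 moves
    print(len(hanoi(10)))  # 1023 moves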

elbasti|5 months ago
It's not some "magical way"--the ways in which a human thinks that an LLM doesn't are pretty obvious, and I dare say self-evidently part of what we think constitutes human intelligence:
- We have a sense of time (e.g., ask an LLM to follow up in 2 minutes)
- We can follow negative instructions ("don't hallucinate; if you don't know the answer, say so")

d3ckard|5 months ago
The burden of proof is on AI proponents.

exe34|5 months ago
My favourite game is to try to get them to be more specific - every single time they manage to exclude a whole bunch of people from being "intelligent".

lordhumphrey|5 months ago
Yes, and the name for this behaviour is "being scientific".
Imagine a process called A, and, as you say, we've no idea how it works.
Imagine, then, that a new process, B, comes along. Some people know a lot about how B works; most people don't. But the people selling B continuously tell me it works like process A, and they even resort to using various cutesy linguistic tricks to make it feel like that's the case.
The people selling B even go so far as to suggest that if we don't accept a future where B takes over, we won't have a job, no matter what our poor A does.
What's the rational thing to do, for a sceptical, scientific mind? Agree with the company that process B is of course like process A, when we - as you say yourself - don't understand process A in any comprehensive way at all? Or would that be utterly nonsensical?

mvdtnz|5 months ago

leptons|5 months ago
When I write a sentence, I do it with intent, with a specific purpose in mind. When an "AI" does it, it's predicting the next word that might satisfy the input requirement. It doesn't care whether the sentence it writes makes any sense, is factual, etc., so long as it is human-readable and follows grammatical rules. It does not do this with any specific intent, which is why you get slop and just plain wrong output a fair amount of the time. Just because it produces something that sounds correct sometimes does not mean it's doing any thinking at all. Yes, humans do actually think before they speak; LLMs do not, cannot, and will not, because that is not what they are designed to do.
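(To make "predicting the next word" concrete, here is a toy sketch with made-up probabilities -- nothing like a real transformer internally, purely illustrative of the shape of the loop: at each step it picks whatever word is most likely to come next, with no model of truth or intent.)

    # Toy greedy "next word" generator over a hypothetical, hand-written table
    # of continuation probabilities. Illustrative only: it optimises for a
    # plausible next word, not for whether the sentence is true or intended.
    NEXT_WORD = {
        "the":   {"cat": 0.5, "report": 0.3, "moon": 0.2},
        "cat":   {"sat": 0.6, "slept": 0.4},
        "sat":   {"quietly": 0.7, "down": 0.3},
        "slept": {"soundly": 1.0},
    }

    def generate(start, max_words=6):
        words = [start]
        while len(words) < max_words and words[-1] in NEXT_WORD:
            candidates = NEXT_WORD[words[-1]]
            words.append(max(candidates, key=candidates.get))  # greedy pick
        return " ".join(words)

    print(generate("the"))  # -> "the cat sat quietly"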