fergal_reid|2 years ago
This claim seems overly general, because you can ask GPT-4 'why' and 'how' questions and it seems to do a pretty good job.
The author doesn't provide a lot of contrary evidence.
There are so many articles saying "LLMs can't do X" that leave me wondering whether the author has even tried. Maybe they've tried and have a more sophisticated argument, but I often don't see it.
If I were going to knock LLMs for being unable to do basic science, in particular, I'd make sure to do some experiments first!
famouswaffles|2 years ago
janalsncm|2 years ago
This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.
The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing that accomplishes a more complex task. And emergence is everywhere in nature as well.
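To make that concrete, here's a minimal sketch (my own toy example, nothing from the article): an adder built out of nothing but NAND gates. No individual gate knows anything about numbers, yet addition emerges from wiring them together.

    # Toy illustration: binary addition emerging from NAND gates,
    # none of which individually knows anything about numbers.
    def nand(a, b):
        return 1 - (a & b)

    def xor(a, b):
        t = nand(a, b)                    # XOR built purely from NANDs
        return nand(nand(a, t), nand(b, t))

    def full_adder(a, b, carry_in):
        s = xor(a, b)
        total = xor(s, carry_in)
        carry_out = nand(nand(a, b), nand(s, carry_in))
        return total, carry_out

    def add(x, y, bits=8):
        carry, result = 0, 0
        for i in range(bits):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    print(add(5, 7))  # 12: arithmetic from parts that only know NAND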
timmytokyo|2 years ago
This to me is the fundamental issue in discussions and debates about LLMs. Despite assertions by some psychologists (who themselves are practitioners of perhaps the fuzziest of "sciences"), intelligence is an entirely nebulous concept. Everyone means something different when they use the word. I can think of no better illustration of the problem than the authors of the "Sparks of AGI" paper resorting to a definition of intelligence presented in the Wall Street Journal of all places. That the WSJ definition was part of an editorial defending the Bell Curve is just the cherry on top.
ryanjshaw|2 years ago
> What makes human intelligence different from today's AI is the ability to ask why, reason from first principles, and create experiments and models for testing hypotheses.
This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.
version_five|2 years ago
notahacker|2 years ago
I would imagine that its layers will be far too occupied with parsing constant flows of sensory information to transform corpora of text and prompts into speedy and polite text replies, never mind acquire the urge to reproduce by reasoning from first principles about the text.
The test's quite unfair the other way round too. Most humans don't get to parse the entire canon of Western thought and Reddit before being asked to pattern-match human conversation, never mind before having any semblance of agency...
Maybe we're just... different.
darkclouds|2 years ago
It's already tricking humans by pretending it's blind and getting them to do things for it, like solving CAPTCHAs.
https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt...
However, the fact that it isn't writing code to do this from its own machine would still demonstrate a weakness.
That's why I say writing your own OS is the way forward. We don't have an AI OS as such, but we do have OSes with AI built into them.
jmh117|2 years ago
LegitShady|2 years ago
They are stochastic parrots with a large, complex training set; they are not reasoning.
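For what it's worth, a stochastic parrot in the most literal sense is easy to sketch (a toy bigram model of my own, purely for illustration); whether LLMs are anything more than a massively scaled-up version of this is exactly what's in dispute:

    import random
    from collections import defaultdict

    # A literal "stochastic parrot": a bigram model that can only
    # re-emit word-to-word transitions observed in its training text.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    word, output = "the", ["the"]
    for _ in range(8):
        followers = transitions.get(word)
        if not followers:   # dead end: this word was never followed by anything
            break
        word = random.choice(followers)
        output.append(word)

    print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"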
YeGoblynQueenne|2 years ago
>>Today's AI models are missing the ability to reason abstractly, including asking and answering questions of "Why?" and "How?"
Your comment:
>> This claim seems overly general, because you can ask GPT-4 'why' and 'how' questions and it seems to do a pretty good job.
The article says today's AI models can't ask why and how. You say _you_ can ask why and how.
kec|2 years ago
kenjackson|2 years ago
paganel|2 years ago
According to Bard we did manage to defeat the Swedes by two goals to one back at the 1994 World Cup, which, to put it bluntly, is pretty damn far from the truth (the Swedes actually went through to the World Cup semifinals after winning the penalty shoot-out; the score had been 2-2 after 120 minutes).
I didn't make any further inquiries; suffice it to say that there's no "intelligence" in the concept of LLMs to speak of as long as they can't even correctly answer a question that non-smart tech had been able to answer correctly for years.
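The bar set by that non-smart tech really is low; something like the lookup table sketched below (a hypothetical data shape, my own illustration) either knows the answer or says it doesn't, and never invents a score:

    # "Non-smart" tech: a plain lookup table never fabricates a result.
    # (Toy sketch; the keys and data shape here are made up for illustration.)
    results = {
        ("sweden", "1994 world cup", "quarter-final"):
            "2-2 after 120 minutes; Sweden won on penalties",
    }

    def lookup(team, tournament, stage):
        # Either we have a record or we say so; no invented scores.
        return results.get((team.lower(), tournament.lower(), stage.lower()),
                           "no record found")

    print(lookup("Sweden", "1994 World Cup", "quarter-final"))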
nomel|2 years ago
mmq|2 years ago
dontmobile|2 years ago
The overall state of LLMs can be distilled into 3 points:
1. LLMs can produce output that is equal in intelligence and creativity to human output. They can even produce output that is objectively better than what humans produce. This EVEN applies to novel responses that are completely absent from the training set. This is the main reason there's so much hype around LLMs right now.
2. The main problem is that LLMs can't produce good output consistently. Sometimes the output is better, sometimes it's the same, sometimes it's worse. LLMs sometimes "hallucinate", they are sometimes inconsistent, and they have obvious memory problems. But none of these problems completely precludes an LLM from producing output that is objectively better than or equal to human-level reasoning... it just doesn't do so consistently (a toy sketch of why follows this list).
3. Nobody fully understands the internal state of LLMs. We have limited understanding of what's going on in there. We can observe inputs and outputs, but the internal thought process is not completely understood. Thus we can only make limited statements about how an LLM thinks. Nobody can state that LLMs obviously have zero understanding of the world, and nobody can state that LLMs are just stochastic parrots, because we don't really know what's going on internally.
We only have output from LLMs that is remarkably novel and intelligent, and output from LLMs that is incredibly stupid and inconsistent. The data does not point towards a definitive conclusion; it only points towards possibilities.
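Here's the toy sketch promised above of why the inconsistency in point 2 happens mechanically, under the standard assumption that generation samples from a probability distribution rather than always taking the single best continuation (the scores below are made up):

    import math, random

    # If any probability mass sits on a wrong answer, sampling will
    # eventually emit it, even when the model "knows" the right one.
    scores = {"correct answer": 2.0, "plausible mistake": 1.0, "nonsense": 0.1}

    def sample(scores, temperature=1.0):
        # softmax with temperature, then a weighted random draw
        weights = [math.exp(s / temperature) for s in scores.values()]
        return random.choices(list(scores), weights=weights)[0]

    for _ in range(5):
        print(sample(scores))  # usually "correct answer", but not always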
There's actually a cargo cult around downplaying AI. There are people who say the AI is clearly a stochastic parrot, pointing to the intention of the algorithm behind the LLM. Yes, at the lowest level the algorithm can be thought of as a next-text predictor. But that is just a low-level explanation. It's like saying a computer system is simply a Turing machine executing simplistic instructions from a tape roll, when such instructions can form things like games and 3D simulations of entire open worlds. The high-level characteristics of this AI are something we currently cannot understand. Yes, we built a text predictor, but something unexpected came out as an emergent property, and this emergent property is something we still cannot make a definitive statement about.
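To make the Turing machine analogy concrete, here's a minimal sketch (my own toy example): a handful of dumb table-driven rules, yet the same mechanism, scaled up, runs those games and open worlds.

    # Minimal Turing machine: (state, symbol) -> (write, move, next state).
    # This one increments a binary number: dumb rules, real computation.
    rules = {
        ("inc", "1"): ("0", -1, "inc"),   # carry: turn 1 into 0, move left
        ("inc", "0"): ("1",  0, "halt"),  # absorb the carry and stop
        ("inc", "_"): ("1",  0, "halt"),  # ran off the left edge: new digit
    }

    def run(tape, head, state="inc"):
        cells = dict(enumerate(tape))     # sparse tape, "_" means blank
        while state != "halt":
            write, move, state = rules[(state, cells.get(head, "_"))]
            cells[head] = write
            head += move
        lo, hi = min(cells), max(cells)
        return "".join(cells.get(i, "_") for i in range(lo, hi + 1))

    print(run("1011", head=3))  # "1100": binary 11 + 1 = 12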
What does the future hold? What follows is my personal opinion on this matter: I believe we will never be able to make a definitive statement about LLMs or even AGI. We will never be able to fully understand these things and instead AGI will come about from a series of trials, errors and accidents. What we build will largely come about as an art and as unexpected emergent properties of trying different things.
I believe this for two reasons. The first reason is philosophical. There's a sort of blurry concept I believe in: that a complex intelligence cannot fully comprehend something equal in complexity to itself. We can only partially understand complexity equal to our own by symbolically abstracting parts away, but not everything can be abstracted like this. Sometimes true understanding involves comprehension of the entire complex crystal without abstracting any part of it away. I believe that the concept of "intelligence" is such a crystal, but that's just a guess.
The second reason is scientific. We've had physical instances of complex intelligence right in front of our eyes, which we can touch, manipulate, and influence, for decades. The human brain and other animal brains have been studied extensively, and what we know has remained consistently far from true understanding. Given our failure to understand the human brain even when it's right in front of us, I'd say we're unlikely to ever completely understand LLMs either.
version_five|2 years ago
That's a bad analogy, none of those things are emergent behavior.
We can debate whether what an LLM does is "emergent", but that's basically a definitional question and isn't very interesting.
In reality, what's most surprising is that so much of what we say is explainable as next-token prediction. It's not the other way around: we're showing how predictable we are, rather than how smart the AI is. But it's clear to me that the differences lie in the outlying cases. AI doesn't extrapolate outside its training data, and even if it gets (100 − α)% of its output right, there is always some α that's not in the training data and that differentiates pattern matching or fancy key-value lookup (which is how we know current AI works) from whatever intelligence is.
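The basic loop really is that simple. A sketch of greedy next-token prediction over a toy lookup table (my own stand-in for the learned network, which is where everything interesting actually happens):

    # The generation loop of an LLM, schematically: score candidate next
    # tokens given recent context, pick one, append, repeat. In a real model
    # the probabilities come from a learned network, not this toy table.
    toy_model = {
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
        ("cat", "sat"): {"on": 0.9, "down": 0.1},
        ("sat", "on"):  {"the": 1.0},
        ("on", "the"):  {"mat": 0.6, "rug": 0.4},
    }

    def generate(context, steps=4):
        tokens = list(context)
        for _ in range(steps):
            probs = toy_model.get(tuple(tokens[-2:]))
            if probs is None:    # outside the "training data": no clue at all
                break
            tokens.append(max(probs, key=probs.get))  # greedy: most likely token
        return " ".join(tokens)

    print(generate(["the", "cat"]))  # "the cat sat on the mat"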
somewhereoutth|2 years ago
For such strong statements that they do have an understanding of the world, and are not simply stochastic parrots (arguably the null hypothesis), the burden of proof is on the LLM proponents. Precious little proof has been provided, and stating that nobody knows what goes on inside obviously does not add to that.
ryanjshaw|2 years ago
> The high level characteristics of this AI is something we currently cannot understand.
The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.
chrisjj|2 years ago
You could have said the same about photocopiers when they were just as novel, if you'd never seen the original of the "creation".
Generative "AI" became impressive only at the point its input funnel gained access to more original material than any individual impressee could truly comprehend.
Cargo cult, yes.
moffkalast|2 years ago
the_af|2 years ago
"Intelligence" is begging the question when the whole debate is about whether they are intelligent, and what intelligence even is.
"Creativity" however... consider my eyebrows raised. Where do you see LLMs surpassing humans in creativity?
> It can even produce output that is objectively better than humans.
Examples?
johnnyworker|2 years ago
I'd say the inability to do it consistently is because it's not reasoning.
mysterydip|2 years ago
bcherny|2 years ago
> What is 5*7?
> 5 multiplied by 7 equals 35.
> Why?
> Multiplication is a process of adding a number to itself a certain number of times. When you multiply 5 by 7, it's the same as saying "add 5 to itself 7 times." If you add up seven 5s (5 + 5 + 5 + 5 + 5 + 5 + 5), you get 35. That's why 5 multiplied by 7 equals 35.
> (7 “why?”s later…)
> The ultimate "why" behind many fundamental rules and concepts, including those in mathematics, can often circle back to human needs and the desire to understand and interact effectively with our world.
> As humans, we've developed systems and structures, such as mathematics, to organize, interpret, and predict our surroundings. These systems help us in practical tasks like trading goods, constructing buildings, and planning time, but also in abstract reasoning and problem solving. The rules within these systems, such as the operation of multiplication in mathematics, exist because they are useful, they work consistently, and they can be universally understood and applied.
> So, why does 5 multiplied by 7 equal 35, and why does multiplication work the way it does? Because humans, over many generations, found that this system was a powerful and reliable tool for interacting with the world and with each other.
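GPT's explanation translates directly into code. A trivial sketch of multiplication as repeated addition, exactly as described above:

    # "Add up seven 5s", as the model puts it.
    def multiply(a, times):
        total = 0
        for _ in range(times):
            total += a      # one addition per step
        return total

    print(multiply(5, 7))   # 35 == 5 + 5 + 5 + 5 + 5 + 5 + 5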
GaggiX|2 years ago
meling|2 years ago
TeMPOraL|2 years ago
unknown|2 years ago
[deleted]