atdt|9 months ago
That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.
loveparade|9 months ago
I would push back on this a little bit. While it has not helped us understand our own intelligence, it has made me question whether such a thing even exists. Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions. When CNNs learned to recognize faces through a series of hierarchical abstractions that make intuitive sense, it's hard to deny the similarities to what we're doing as humans. Perhaps it's all just emergent properties of some messy evolved substrate.
The big lesson for me from AI development over the last 10 years has been "I guess humans really aren't so special after all", which is similar to what we've been through in Physics. Theories often made the mistake of giving human observers some kind of special importance, which was later discovered to be the reason those theories failed to generalize.
user_7832|9 months ago
I would take the opposite view instead.
How wonderful is it that, with naturally evolved processes and neural structures, we have been able to create what we have? Van Gogh's paintings came out of the human brain. The Queens of the Skies - hundreds of tons of metal and composites flying across continents in the form of a Boeing 747 or an A380 - were designed by the human brain. We went to space, studied nature (and run conservation programs for organisms we have found to need help), took pictures of the Pillars of Creation, which are so incredibly far away... all with such a "puny" structure a few cm in diameter? I think that's freaking amazing.
_glass|9 months ago
mykowebhn|9 months ago
Isn't Physics trying to describe the natural world? I'm guessing you are taking two positions here that are causing me confusion with your statement: 1) that our minds can be explained strictly through physical processes, and 2) our minds, including our intelligence, are outside of the domain of Physics.
If you take 1) to be true, then it follows that Physics, at least theoretically, should be able to explain intelligence. It may be intractably hard, like it might be intractably hard to have physics describe and predict the motions of more than two planetary bodies.
I guess I'm saying that Physical laws ARE natural laws. I think you might be thinking that natural laws refer solely to all that messy, living stuff.
Balgair|9 months ago
> Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions...Perhaps it's all just emergent properties of some messy evolved substrate.
Yeah, it is very likely that there are no such laws; it's the substrate. The fruit fly brain (let alone the human one) has been mapped, and we've figured out that it's not just the synapse count, but the 'weights' that matter too [0]. Mind you, those weights adjust in real time when a living animal is out in the world.
You'll see in the literature that there are people with some 'lucky' form of hydranencephaly [1] where their brain is as thin as paper. But they vote, get married, have kids, and for some strange reason seem to work in mailrooms (not a joke). So we know it's something about the connectome that's the 'magic' of a human.
My pet theory: We need memristors [2] to better represent things. But that takes redesigning the computer from the metal on up, so is unlikely to occur any time soon with this current AI craze.
> The big lesson from the AI development in the last 10 years from me has been "I guess humans really aren't so special after all" which is similar to what we've been through with Physics.
Yeah, biologists get there too, just from the other direction, with animals and humans. Like, dogs make vitamin C internally, and humans have that gene too; it's just dormant, ready for evolution (or genetic engineering) to reactivate. That said, these neuroscience questions about us and the other great apes are somewhat large and strange. I'm not big into that literature, but from what little I know, the exact mechanisms and processes that get you from tool-using orangutans to tool-using humans seem to be a bit strange and harder for us to grasp. Again, not in that field though.
In the end though, humans are special. We're the only ones on the planet that ever really asked a question. There's a lot to us and we're actually pretty strange in the end. There's many centuries of work to do with biology, we're just at the wading stage of that ocean.
[0] https://en.wikipedia.org/wiki/Drosophila_connectome
[1] https://en.wikipedia.org/wiki/Hydranencephaly
[2] https://en.wikipedia.org/wiki/Memristor
lenkite|9 months ago
imadierich|9 months ago
[deleted]
csomar|9 months ago
I was reading a reddit post the other day where the guy lost his crypto holdings because he input his recovery phrase somewhere. We question the intelligence of LLMs because they might open a website, read something nefarious, and then do it. But here we have real humans doing the exact same thing...
> I guess humans really aren't so special after all
No, they are not. But we are still far from getting there with current LLMs, and I suspect mimicking the human brain won't be the best path forward.
godelski|9 months ago
I'm assuming this was the part you were saying he doesn't hold, because it is pretty clear he holds the second thought.
I have a difficult time reading this as saying that LLMs aren't fantastic and useful. That seems to be the core of his argument: he's talking about the science side, not the engineering side.
PeterStuer|9 months ago
It is as if a biochemist looked at a human brain and concluded there is no 'intelligence' there at all, just a whole lot of electro-chemical reactions. It completely ignores the potential for emergence.
Don't misunderstand me, I'm not saying 'AGI has arrived', but I'd say even current LLMs most certainly hold interesting lessons for the science of human language development and evolution. What can the success of transfer learning in these models contribute to the debates on universal language faculties? How do invariants correlate across LLM systems and humans?
Barrin92|9 months ago
There are two kinds of emergence: one scientific, the other a strange, vacuous notion invoked in the absence of any theory or explanation.
The first is the emergence we talk about when, for example, gas or liquid states, or combustibility, emerge from certain chemical or physical properties of particles. It's not just that they're emergent; we can explain how they're emergent and how their properties are already present in the lower level of abstraction. Emergence properly understood is always reducible to lower-level states, not some magic word for when you don't know how something works.
In these AI debates, however, that's exactly how "emergence" is used: people just assert it, as if it followed necessarily from their assumptions. They don't offer a scientific explanation. (The same is true of various other topics, like consciousness, or what have you.) This is pointless; it's a sort of god of the gaps disguised as an argument. When Chomsky talks about science proper, he correctly points out that these kinds of arguments have no place in it, because the point of science is to build coherent theories.
rf15|9 months ago
People's illusions, and their willingness to give up their own authority and control to take shortcuts that optimise for lowest effort / highest yield (not dissimilar to what you would get with... autoregressive models!), were an astonishing insight to me.
OccamsMirror|9 months ago
At some point you have to wonder: is an LLM making your hiring decision really better than rolling a die? At least the die doesn't give you the illusion of rationality; it doesn't generate a neat-sounding paragraph "explaining" why candidate A is the obvious choice. The LLM produces content that looks like reasoning but has no actual causal connection to the decision - it's a mimicry of explanation without the substance of causation.
You can argue that humans do the same thing. But post-hoc reasoning is often a feedback loop for the eventual answer. That's not the case for LLMs.
xron|9 months ago
However, a paper published last year (Mission: Impossible Language Models, Kallini et al.) showed that LLMs do NOT learn impossible languages as easily as they learn possible languages. This undermines everything that Chomsky says about LLMs in the linked interview.
ComposedPattern|9 months ago
Also, GPT-2 actually seems to do quite well on some of the tested languages, including word-hop, partial-reverse, and local-shuffle. It doesn't do quite as well as on plain English, but GPT-2 was designed to learn English, so it's not surprising that it would do a little better there. For instance, the tokenization seems biased towards English. They show "bookshelf" becoming the tokens "book", "sh", and "lf" - which in many of the tested languages get spread throughout a sentence. I don't think a system designed to learn shuffled English would tokenize this way!
https://aclanthology.org/2024.acl-long.787.pdf
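The tokenization bias is easy to see with a toy sketch: a greedy longest-match tokenizer over a made-up, English-biased vocabulary (not GPT-2's actual BPE merges, and a hypothetical letter-shuffle). Subword units learned from English fragment a shuffled word into many more tokens.

```python
# Toy greedy longest-match subword tokenizer. The vocabulary is invented for
# illustration and is NOT GPT-2's real BPE merge table.
VOCAB = {"book", "shelf", "sh", "elf", "bo", "ok", "el"} | set("abcdefghijklmnopqrstuvwxyz")

def tokenize(word, vocab=VOCAB):
    tokens, i = [], 0
    while i < len(word):
        # take the longest vocabulary entry that matches at position i
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

print(tokenize("bookshelf"))   # ['book', 'shelf'] -- 2 tokens
print(tokenize("oobklshef"))   # same letters shuffled -> 8 mostly single-character tokens
```

An English-trained vocabulary rewards English letter order; once the letters are shuffled, almost nothing merges, which is one plausible reason the shuffled-language scores lag even if the underlying pattern is learnable.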
foobarqux|9 months ago
AfterHIA|9 months ago
The AI works on English, C++, Smalltalk, Klingon, nonsense, and gibberish. Like Turing's paper, this illustrates the difference between "machines being able to think" and "machines being able to demonstrate some well-understood mathematical process like pattern matching".
https://en.wikipedia.org/wiki/Computing_Machinery_and_Intell...
fooker|9 months ago
Science progresses in such a way that, when you see it happen in front of you, it doesn't seem substantial at all, because we typically don't understand the implications of new discoveries.
So far, in the last few years, we have discovered the importance of the role of language in intelligence. We have also discovered quantitative ways to describe how close one concept is to another. More recently, from the new reasoning AI models, we have discovered something counterintuitive that also seems true of human reasoning: incorrect or incomplete reasoning can often reach the correct conclusion.
lamp_book|9 months ago
People are waiting for this Prometheus-level moment with AI where it resembles us exactly but exceeds our capabilities, but I don't think that's necessary. It parallels humanity explaining Nature in our own image as God and claiming it was the other way around.
hulitu|9 months ago
First, they have to implement "intelligence" for LLMs, then we can compare. /s