
Chomsky on what ChatGPT is good for (2023)

286 points | mef | 9 months ago | chomsky.info

359 comments


atdt|9 months ago

The level of intellectual engagement with Chomsky's ideas in the comments here is shockingly low. Surely, we are capable of holding these two thoughts: one, that the facility of LLMs is fantastic and useful, and two, that the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution.

That may change, particularly if the intelligence of LLMs proves to be analogous to our own in some deep way—a point that is still very much undecided. However, if the similarities are there, so is the potential for knowledge. We have a complete mechanical understanding of LLMs and can pry apart their structure, which we cannot yet do with the brain. And some of the smartest people in the world are engaged in making LLMs smaller and more efficient; it seems possible that the push for miniaturization will rediscover some tricks also discovered by the blind watchmaker. But these things are not a given.

loveparade|9 months ago

> AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution

I would push back on this a little bit. While it has not helped us to understand our own intelligence, it has made me question whether such a thing even exists. Perhaps there are no simple and beautiful natural laws, like those that exist in physics, that can explain how humans think and make decisions. When CNNs learned to recognize faces through a series of hierarchical abstractions that make intuitive sense, it became hard to deny the similarities to what we're doing as humans. Perhaps it's all just emergent properties of some messy evolved substrate.

The big lesson from AI development over the last 10 years, for me, has been "I guess humans really aren't so special after all", which is similar to what we've been through with physics. Theories often made the mistake of giving human observers some kind of special importance, which was later discovered to be the reason those theories failed to generalize.

godelski|9 months ago

  > one, that the facility of LLMs is fantastic and useful
I didn't see where he was disagreeing with this.

I'm assuming this was the part you were saying he doesn't hold, because it is pretty clear he holds the second thought.

  | is it likely that programs will be devised that surpass human capabilities? We have to be careful about the word “capabilities,” for reasons to which I’ll return. But if we take the term to refer to human performance, then the answer is: definitely yes.
I have a difficult time reading this as saying that LLMs aren't fantastic and useful.

  | We can make a rough distinction between pure engineering and science. There is no sharp boundary, but it’s a useful first approximation. Pure engineering seeks to produce a product that may be of some use. Science seeks understanding.
This seems to be the core of his conversation. That he's talking about the side of science, not engineering.

PeterStuer|9 months ago

It indeed baffles me how dismissive academics overall seem to be of recent breakthroughs in sub-symbolic approaches as models from which we can learn about 'intelligence'.

It is as if a biochemist looks at a human brain, and concludes there is no 'intelligence' there at all, just a whole lot of electro-chemical reactions. It fully ignores the potential for emergence.

Don't misunderstand me, I'm not saying 'AGI has arrived', but I'd say even current LLMs most certainly hold interesting lessons for the science of human language development and evolution. What can the success of transfer learning in these models contribute to the debates on universal language faculties? How do invariants correlate across LLM systems and humans?

rf15|9 months ago

> the major breakthroughs of AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution

People's illusions, and their willingness to debase their own authority and control to take shortcuts that optimise for lowest effort / highest yield (not dissimilar to what you would get with... autoregressive models!), were an astonishing insight to me.

xron|9 months ago

Chomsky's central criticism of LLMs is that they can learn impossible languages just as easily as they learn possible languages. He refers to this repeatedly in the linked interview. Therefore, they cannot teach us about our own intelligence.

However, a paper published last year (Mission: Impossible Language Models, Kallini et al.) showed that LLMs do NOT learn impossible languages as easily as they learn possible languages. This undermines everything that Chomsky says about LLMs in the linked interview.
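
A minimal sketch of how such "impossible language" corpora can be built from natural text. The perturbation schemes below are illustrative assumptions in the spirit of that line of work, not the paper's exact constructions:

```python
def reverse_language(sentence: str) -> str:
    """Deterministically reverse word order -- a rule no attested human language uses."""
    return " ".join(reversed(sentence.split()))

def partial_reverse(sentence: str) -> str:
    """Reverse only the second half of each sentence -- another unnatural rule."""
    words = sentence.split()
    mid = len(words) // 2
    return " ".join(words[:mid] + list(reversed(words[mid:])))

corpus = ["the cat sat on the mat", "dogs chase cats"]
impossible_corpus = [reverse_language(s) for s in corpus]
print(impossible_corpus[0])  # "mat the on sat cat the"
```

Training identical models on the natural corpus and on the perturbed one, then comparing learning curves, is the shape of the experiment being described.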

AfterHIA|9 months ago

What exactly do you mean by "analogous to our own" and "in a deep way" without making an appeal to magic or not-yet-discovered fields of science? I understand what you're saying, but when you scrutinize these things you end up in a place that's less scientific than one might think. That seems to be one of Chomsky's salient points: we really, really need to get a handle on when we're doing science in the contemporary Kuhnian sense and when we're doing philosophy.

The AI works on English, C++, Smalltalk, Klingon, nonsense, and gibberish. Like Turing's paper this illustrates the difference between, "machines being able to think" and, "machines being able to demonstrate some well understood mathematical process like pattern matching."

https://en.wikipedia.org/wiki/Computing_Machinery_and_Intell...

fooker|9 months ago

> not, at least so far, substantially deepened our understanding of our own intelligence

Science progresses in a manner that when you see it happen in front of you it doesn't seem substantial at all, because we typically don't understand implications of new discoveries.

So far, in the last few years, we have discovered the importance of the role of language in intelligence. We have also discovered quantitative ways to describe how close one concept is to another. More recently, from the new reasoning AI models, we have discovered something counterintuitive that also seems true of human reasoning--incorrect/incomplete reasoning can often reach the correct conclusion.

lamp_book|9 months ago

In my opinion it will or already has redefined our conceptual models of intelligence - just like physical models of atoms or gravitational mechanics evolved and newer models replace the older. The older models aren't invalidated (all models are wrong, after all), but their limits are better understood.

People are waiting for this Prometheus-level moment with AI where it resembles us exactly but exceeds our capabilities, but I don't think that's necessary. It parallels humanity explaining Nature in our own image as God and claiming it was the other way around.

hulitu|9 months ago

> if the intelligence of LLMs proves to be analogous to our own in some deep way

First, they have to implement "intelligence" for LLMs, then we can compare. /s

papaver-somnamb|9 months ago

There was an interesting debate where Chomsky took a position on intelligence being rooted in symbolic reasoning and Asimov asserted a statistical foundation (ah, that was not intentional ;).

LLM designs to date are purely statistical models. A pile, a morass of floating point numbers and their weighted relationships, along with the software and hardware that animates them and the user input and output that makes them valuable to us. An index of the data fed into them, different from a Lucene or SQL DB index made from compsci algorithm and data structure primitives. Recognizable to Asimov's definition.

And these LLMs feature no symbolic reasoning whatsoever within their computational substrate. What they do feature is a simple recursive model: Given the input so far, what is the next token? And they are thus enabled after training on huge amounts of input material. No inherent reasoning capabilities, no primordial ability to apply logic, or even infer basic axioms of logic, reasoning, thought. And therefore unrecognizable to Chomsky's definition.
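
The "given the input so far, what is the next token?" loop described above can be sketched as follows. `next_token_distribution` is a made-up stand-in for the trained network, and its hard-coded table (conditioning only on the last token) is an assumption for illustration; a real LLM conditions on the entire context:

```python
import random

def next_token_distribution(context):
    """Stand-in for a trained network: returns {token: probability}."""
    table = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.9, "ran": 0.1},
        "dog": {"ran": 1.0},
        "sat": {"<eos>": 1.0},
        "ran": {"<eos>": 1.0},
    }
    return table.get(context[-1], {"<eos>": 1.0})

def generate(prompt, max_tokens=10):
    """Autoregressive loop: sample a token, append it, repeat."""
    tokens = prompt[:]
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        choices, probs = zip(*dist.items())
        tok = random.choices(choices, weights=probs)[0]  # sample proportional to probability
        if tok == "<eos>":
            break
        tokens.append(tok)
    return tokens

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat']
```

Everything an LLM emits comes out of exactly this loop; the entire sophistication lives inside the distribution function.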

So our LLMs are a mere parlor trick. A one-trick pony. But the trick they do is oh-so vastly complicated, and very appealing to us, of practical application and real value. It harkens back to the question: What is the nature of intelligence? And how to define it?

And I say this while thinking of the marked contrast in apparent intelligence between an LLM and, say, a 2-year-old child.

sdwr|9 months ago

That's not true, symbols emerge out of the statistics. Just look at the imagenet analysis that identified distinct concepts in different layers, or the experiments with ablation in LLMs.

They may not be doing strict formal logic, but they are definitely compressing information into, and operating using, symbols.

dahcryn|9 months ago

To me the interesting idea is the followup question: Can you do complex reasoning without intelligence?

LLMs seem to have proven themselves to be more than a one-trick pony. There is actually some semblance of reasoning and structuring, etc., no matter if directly within the LLM or supported by computer code. E.g. it can be argued that the latest LLMs, like Gemini 2.5 and Claude 4, in fact do complex reasoning.

We have always taken for granted you need intelligence for that, but what if you don't? It would greatly change our view on intelligence and take away one of the main factors that we test for in e.g. animals to define their "intelligence".

DarknessFalls|9 months ago

I think we are ignoring that the statistical aspect of our ability to reason effectively and to apply logic was predicated on the deaths of millions of our ancestors. When they made the wrong decision, they likely didn't reproduce. When they made the right decision, that particular configuration of their cortical substrate was carried forward a generation. The product of this cross-generational training could have easily led to non-intelligence, and often does, but we have survivor's bias in our favor.

tmzt|9 months ago

Perhaps the next question we are asking is "what happens if you give a statistical model symbolic input" and the answer appears to be, you get symbolic output.

Even more strangely, the act of giving a statistical model symbolic input allows it to build a context which then shapes the symbolic output in a way that depends on some level of "understanding" instructions.

We "train" this model on raw symbolic data and it extracts the inherent semantic structure without any human ever embedding in the code anything resembling letters, words, or the like. It's as if Chomsky's elusive universal language is semantic structure itself.

Xmd5a|9 months ago

>There was an interesting debate where Chomsky took a position on intelligence being rooted in symbolic reasoning and Asimov asserted a statistical foundation (ah, that was not intentional ;).

Chomsky vs Norvig

https://norvig.com/chomsky.html

marcosdumay|9 months ago

> Chomsky took a position on intelligence being rooted in symbolic reasoning and Asimov asserted a statistical foundation

I dunno if people knew it at that time, but those two views are completely equivalent.

otabdeveloper4|9 months ago

> and very appealing to us

Yes, because anthropomorphism is hardwired into our biology. Just two dots and an arc triggers a happy feeling in all humans. :)

> of practical application and real value

That is debatable. So far no groundbreaking useful applications have been found for LLMs. We want to believe, because they make us feel happy. But the results aren't there.

zombot|9 months ago

The voice of reason. And, as always, the voice of reason is being vigorously ignored. Dreams of big profits and exerting control through generated lies are irresistible. And among others, HN comment threads demonstrate how even people who should know better are falling for it in droves. In fact this very thread shows how Chomsky's arguments fall on deaf ears.

OccamsMirror|9 months ago

Don't forget exerting control through automated surveillance. What a wonderful tool we have created for detecting whether citizens step out of line without needing giant offices full of analysts.

calibas|9 months ago

The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. Taking a word (token) and abstracting its "meaning" as a 1,000-dimension vector seems like something that should revolutionize the field of linguistics. A whole new tool for analyzing and understanding the underlying patterns of all language!

And there's a fact here that's very hard to dispute, this method works. I can give a computer instructions and it "understands" them in a way that wasn't possible before LLMs. The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).
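
The abstraction described above can be shown with a toy version: each word maps to a vector, and geometric closeness stands in for relatedness. The 4-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of learned dimensions:

```python
import math

# Toy "embeddings" -- invented values, not learned ones.
embeddings = {
    "king":  [0.9, 0.8, 0.1, 0.2],
    "queen": [0.9, 0.7, 0.9, 0.2],
    "apple": [0.1, 0.2, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: the standard closeness measure for word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related concepts
print(cosine(embeddings["king"], embeddings["apple"]))  # lower: unrelated concepts
```

The claim in the parent comment is that relations like these, learned at scale from text alone, are themselves linguistic data worth studying.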

krackers|9 months ago

Restricted to linguistics, LLMs' supposed lack of understanding should be a non sequitur. If the question is whether LLMs have formed a coherent ability to parse human languages, the answer is obviously yes. In fact, not just human languages: as seen with multimodality, the same transformer architecture seems to work well to model and generate anything with inherent structure.

I'm surprised that he doesn't mention "universal grammar" once in that essay. Maybe it so happens that humans do have some innate "universal grammar" wired in by instinct, but it's clearly not _necessary_ for parsing. You don't need to set up explicit language rules or a generative structure; given enough data, the model learns to produce it. I wonder if anyone has gone back and tried to see if you can extract explicit generative rules from the learned representation, though.

Since the "universal grammar" hypothesis isn't really falsifiable, at best you can hope for some generalized equivalent that's isomorphic to the platonic representation hypothesis and claim that all human language is aligned in some given latent representation, and that our brains have been optimized to be able to work in this subspace. That's at least a testable assumption, by trying to reverse engineer the geometry of the space LLMs have learned.

catigula|9 months ago

Unfortunately you've undermined your point by making sweeping claims about something that is the literal hardest known problem in philosophy (consciousness).

I'm not actually comfortable saying that LLMs aren't conscious. I think there's a decent chance they could be in a very alien way.

I realize that this is a very weird and potentially scary claim for people to parse but you must understand how weird and scary consciousness is.

belter|9 months ago

> whether or not an LLM is conscious in the same way as a human being

The problem is... that there is a whole amount of "smart" activities humans do without being conscious of it.

- Walking, riding a bike, or typing on a keyboard happen fluidly without conscious planning of each muscle movement.

- You can finish someone's sentence or detect that a sentence is grammatically wrong, often without being able to explain the rule.

- When you enter a room, your brain rapidly identifies faces, furniture, and objects without you consciously thinking, “That is a table,” or “That is John.”

qwery|9 months ago

Why would that thrill linguists? I'm not saying it hasn't/wouldn't/shouldn't, but I don't see why this technology would have the dramatic impact you imagine.

Is/was the same true for ASCII/Smalltalk/binary? They are all another way to translate language into something the computer "understands".

Perhaps the fact that it hasn't would lead some to question the validity of their claims. When a scientist makes a claim about how something works, it's expected that they prove it.

If the technology is as you say, show us.

automatoney|9 months ago

Word embeddings (that 1000-dimension vector you mention) are not new. No comment on the rest of your comment, but that aspect of LLMs is "old" tech - word2vec was published 11 years ago.

godelski|9 months ago

  > The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists. 
I think they are really excited by this. There seems no deficiency of linguists using these machines.

But I think it is important to distinguish the ability to understand language and translate it. Enough that you yourself put quotes around "understanding". This can often be a challenge for many translators, not knowing how to properly translate something because of underlying context.

Our communication runs far deeper than the words we speak or write on a page. This is much of what linguistics is about, this depth. (Or at least that's what they've told me, since I'm not a linguist) This seems to be the distinction Chomsky is trying to make.

  > The main debate now is over the semantics of words like "understanding" and whether or not an LLM is conscious in the same way as a human being (it isn't).
Exactly. Here, I'm on the side of Chomsky and I don't think there's much of a debate to be had. We have a long history of being able to make accurate predictions while erroneously understanding the underlying causal nature.

My background is physics, and I moved into CS (degrees in both), working on ML. I see my peers at the top like Hinton[0] and Sutskever[1] making absurd claims. I call them absurd, because it is a mistake we've made over and over in the field of physics[2,3]. One of those lessons you learn again and again, because it is so easy to make the mistake. Hinton and Sutskever say that this is a feature, not a bug. Yet we know it is not enough to fit the data. Fitting the data allows you to make accurate, testable predictions. But it is not enough to model the underlying causal structure. Science has a long history demonstrating accurate predictions with incorrect models. Not just in the way of the Relativity of Wrong[4], but more directly. Did we forget that the Geocentric Model could still be used to make good predictions? Copernicus did not just face resistance from religious authorities, but also academics. The same is true for Galileo, Boltzmann, Einstein and many more. People didn't reject their claims because they were unreasonable. They rejected the claims because there were good reasons to. Just... not enough to make them right.

[0] https://www.reddit.com/r/singularity/comments/1dhlvzh/geoffr...

[1] https://www.youtube.com/watch?v=Yf1o0TQzry8&t=449s

[2] https://www.youtube.com/watch?v=hV41QEKiMlM

[3] Think about what Fermi said in order to understand the relevance of this link: https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

[4] https://hermiene.net/essays-trans/relativity_of_wrong.html

gerdesj|9 months ago

"The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists."

No, there is no understanding at all. Please don't confuse codifying with understanding or translation. LLMs don't understand their input, they simply act on it based on the way they are trained on it.

"And there's a fact here that's very hard to dispute, this method works. I can give a computer instructions and it "understands" them "

No, it really does not understand those instructions. It is at best what used to be called an "idiot savant". Mind you, people used to describe others like that - who is the idiot?

Ask your favoured LLM to write a programme in a less-used language -- ooh, let's try VMware's PowerCLI (it's PowerShell, so quite popular) -- and get it to do something useful. It won't, because it can't, but it will still spit out something. PowerCLI is not much extant across Stack Overflow and co, but it is PS based, so the LLMs will hallucinate madder than a hippie on a new super weed.

visarga|9 months ago

Brains don't have innate grammar so much as languages are selected to fit baby brains. Chomsky got it backwards: languages co-evolved with human brains to fit our capacities and needs. If a language is not useful or can't be learned by children, it does not spread; it just disappears.

It's like wondering how well your shoes fit your feet, forgetting that shoes are made and chosen to fit your feet in the first place.

suddenlybananas|9 months ago

It's not an either/or. The fact any human language is learnable by any human and not by, say, chimpanzees needs explaining.

Chomsky also talks about these kinds of things in detail in Hauser, Chomsky and Fitch (2002), where they describe them as "third factors" in language acquisition.

qwery|9 months ago

You could say that languages developed ("evolved") to fit the indisputable human biological faculty for language.

asmeurer|9 months ago

It's amusing that he argues (correctly) that "there is no Great Chain of Being with humans at the top," but then claims that LLMs cannot tell us anything about language because they can learn "impossible languages" that infants cannot learn. Isn't that an anthropomorphic argument, saying that what counts as a language is inherently defined by human cognition?

tgv|9 months ago

When Chomsky says "language," he means "natural/human language," not e.g. /[ab]*/ or prime numbers.

foobarqux|9 months ago

Yes, studying human language is inherently defined by what humans do, just as -- he points out, if you could understand the article -- studying insect navigation is defined by what insects do and not by what navigation systems humans could design.

lucisferre|9 months ago

"The desert ants in my backyard have minuscule brains, but far exceed human navigational capacities, in principle, not just performance. There is no Great Chain of Being with humans at the top."

This quote brought to mind the very different technological development path of the spider species in Adrian Tchaikovsky's Children of Time. They used pheromones to 'program' a race of ants to do computation.

lostmsu|9 months ago

I don't know what he's talking about. Humans clearly outperform ants in navigation. Especially if you allow arbitrary markings on the terrain.

Sounds like "ineffable nature" mumbo-jumbo.

teleforce|9 months ago

>Many biological organisms surpass human cognitive capacities in much deeper ways. The desert ants in my backyard have minuscule brains, but far exceed human navigational capacities, in principle, not just performance. There is no Great Chain of Being with humans at the top.

Chomsky made interesting points comparing the performance of AI and of biological organisms with that of humans, but his conclusion is not correct. We already know that cheetahs run faster than humans and that elephants are far stronger. Bats can navigate in the dark with echolocation, and dolphins can hunt in packs with high-precision coordination, to devastating effect compared to hunting solo.

Whether we like it or not, humans are at the top, contrary to Chomsky's claim. Through scientific discovery (understanding) and design (engineering) that exploit the laws of nature, humans can and have surpassed all of the cognitive capabilities of these animals, and we are largely responsible for their ongoing demise and extinction. Humans now need to collectively and consciously reverse the extinction of these "superior" cognitive animals in order to preserve them, for better or worse. No other earthbound creature can do that to us.

mrmdp|9 months ago

Chomsky has the ability to say things in a way that most laypersons of average intelligence can grasp. That is an important skill for communication of one's thoughts to the general populace.

Many of the comments herein lack that feature and seem to convey that the author might be full of him(her)self.

Also, some of the comments are a bit pejorative.

bawana|9 months ago

I once heard that a roomful of monkeys with typewriters, given infinite time, could type out the works of Shakespeare. I don't think that's true, any more than random illumination of pixels on a screen could eventually generate a picture.

OTOH, consider LLMs as a roomful of monkeys that can communicate with each other and look at words, sentences, and paragraphs on posters around the room, with a human in the room who gives them a banana when they type out a new word, sentence, or paragraph.

You may eventually get a roomful of monkeys that can respond to a new sentence you give them with what seems an intelligent reply. And since language is the creation of humans, it represents an abstraction of the world made by humans.

ggm|9 months ago

Always a polarising figure; responses here bisect along several planes. I am sure some come armed to disagree because of his lifelong affinity to a left world view, others to defend because of his centrality to theories of language.

I happen to agree with his view, so I came armed to agree, and read this with a view in mind which I felt was reinforced. People are overstating the AGI qualities and misapplying the tool, sometimes the same people.

In particular, the lack of theory and scientific method means both that we're not learning much and that we've reified the machine.

I was disappointed nothing was said of Norbert Wiener, a man who invented cybernetics and had the courage to stand up to the military-industrial complex.

skydhash|9 months ago

Quite a nice overview. For almost any specific measure, you can find something that is better than humans at it. And now LLM architectures have made it possible for computers to produce complete and internally consistent paragraphs of text by rehashing all the digital data that can be found on the internet.

But what we're good at is using all of our capabilities to transform the world around us according to an internal model that is partially shared between individuals. And we have complete control over that internal model, diverging from reality and converging towards it on a whim.

So we can't produce and manipulate text as fast, but the end game is rarely just producing and manipulating text. Mostly it's about sharing ideas and facts (aka internal models), and the control is ultimately what matters. It can help us, just like a calculator can help us solve an equation.

EDIT

After learning to draw, I have that internal model that I switch to whenever I want to sketch something. It's like a special mode of observation, where you no longer simply see but pick up a lot of extra details according to all the drawing rules you internalized. There aren't a lot of them; they're just intrinsically connected with each other. The difficult part is hand-eye coordination and analyzing the divergences between what you see and the internal model.

I think that's why a lot of artists are disgusted with AI generators. There's no internal model. Trying to extract one from a generated picture is a futile exercise. Same with generated texts: alterations from the common understanding follow no pattern.

randmeerkat|9 months ago

> It can help us, just like a calculator can help us solve an equation.

A calculator is consistent and doesn’t “hallucinate” answers to equations. An LLM puts an untrustworthy filter between the truth and the person. Google was revolutionary because it increased access to information. LLMs only obscure that access, while pretending to be something more.

suddenlybananas|9 months ago

>For almost any specific measure, you can find something that is better than human at that point.

Learning language from small data.

schoen|9 months ago

(2023)

asveikau|9 months ago

From what I've heard, Chomsky had a stroke which impacted his language. You will, unfortunately, not hear a recent opinion from him on current developments.

BryanLegend|9 months ago

Yeah, a lot has happened in two years.

ashoeafoot|9 months ago

ChatGPT can write great apologia for bloodthirsty land empires and never live that down:

"To characterize a structural analysis of state violence as “apologia” reveals more about prevailing ideological filters than about the critique itself. If one examines the historical record without selective outrage, the pattern is clear—and uncomfortable for all who prefer myths to mechanisms." The fake academic facade, the US diabolism, the unwillingness to see complexity and responsibility in the other; it's all with us forever...

caycep|9 months ago

Is Chomsky really "one of the most esteemed public intellectuals of all time"? Aristotle and Beyoncé might want to have a word

r00sty|9 months ago

I imagine his opinions might have changed by now. If we're still residing in 2023, I would be inclined to agree with him. Today, in 2025 however, LLMs are just another tool being used to "reduce labor costs" and extract more profit from the humans left who have money. There will be no scientific developments if things continue in this manner.

oysterville|9 months ago

Two year old interview should be labeled as such.

Amadiro|9 months ago

In my view, the major flaw in his argument is his distinction between pure engineering and science:

> We can make a rough distinction between pure engineering and science. There is no sharp boundary, but it’s a useful first approximation. Pure engineering seeks to produce a product that may be of some use. Science seeks understanding. If the topic is human intelligence, or cognitive capacities of other organisms, science seeks understanding of these biological systems.

If you take this approach, of course it follows that we should laugh at Tom Jones.

But a more differentiated approach is to recognize that science also falls into (at least) two categories: the science that we do because it expands our capability into something that we were previously incapable of, and the science that does not. (We typically do a lot more of the former than the latter, for obvious practical reasons.)

Of course it is interesting from a historical perspective to understand the seafaring exploits of Polynesians, but as soon as there was a better way of navigating (i.e. by stars or by GPS) the investigation of this matter was relegated to the second type of science, more of a historical kind of investigation. Fundamentally we investigate things in science that are interesting because we believe the understanding we can gain from it can move us forwards somehow.

Could it be interesting to understand how Hamilton was thinking when he came up with quaternions? Sure. Are a lot of mathematicians today concerning themselves with studying this? No, because the frontier has moved far beyond.*

When you take this view, it's clear that his statement

> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.

is not warranted. Consider the following, in his own analogy:

> These considerations bring up a minor problem with the current GPS enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity. One is that GPS systems are designed in such a way that they cannot tell us anything about navigation, planning routes or other aspects of orientation, a matter of principle, irremediable.

* I'm making a simplifying assumption here that we can't learn anything useful for modern navigation anymore from studying Polynesians or ants; this might well be untrue, but that is also the case for learning something about language from LLMs, which according to Chomsky is apparently impossible and not even up for debate.

unkulunkulu|9 months ago

I came to comments to ask a question, but considering that it is two days old already, I will try to ask you in this thread.

What do you think about his argument about "not being able to distinguish possible language from impossible"?

And why is it inherent in ML design?

Does he assume that there could be such an instrument/algorithm that could do that with a certainty level higher than LLM/some ml model?

I mean, certainly they can be used to make a prediction/answer to this question, but he argues that this answer has no credibility? An LLM is literally a model, i.e. a probability distribution over what is language and what is not; what gives?

Current models are probably tuned more "strictly" to follow existing languages closely, i.e. they will say "no-no" to some yet-unknown language, but isn't this improvable in theory?

Or is he arguing precisely that this “exterior” is not directly correlated with “internal processes and faculties” and cannot make such predictions in principle?

titzer|9 months ago

All this interview proves is that Chomsky has fallen far, far behind how AI systems work today and is retreating to scoffing at all the progress machine learning has achieved. Machine learning has given rise to AI now. It can't explain itself from first principles or its architecture, but you couldn't explain your brain from first principles or its architecture either; you'd need all of neuroscience to do that. Because the AI is digital and (probably) does not reason like our brains do, it somehow falls short?

While there are some things in this I find myself nodding along to, I can't help but feel it's a really old take that is super vague and hand-wavy. The truth is that all of the progress on machine learning is absolutely science. We understand extremely well how to make neural networks learn efficiently; it's why training on data leads anywhere at all. Backpropagation and gradient descent are extraordinarily powerful. Not to mention all the "just engineering" of making chips crunch incredible amounts of numbers.

Chomsky is extremely ungenerous to the progress and also pretty flippant about what this stuff can do.

I think we should probably stop listening to Chomsky; he hasn't said anything here that he hasn't already said a thousand times for decades.

cj|9 months ago

> Not to mention all the "just engineering" of making chips crunch incredible amounts of numbers.

Are LLMs still the same black box as they were described as a couple years ago? Are their inner workings at least slightly better understood than in the past?

Running tens of thousands of chips crunching a bajillion numbers a second sounds fun, but that's not automatically "engineering". You could have the same chips crunching numbers with the same intensity just to run an algorithm that finds a large prime number. Chips crunching numbers isn't automatically engineering, IMO. More like a side effect of engineering? Or a tool you use to run the thing you built?

What happens when we build something that works, but we don't actually know how? We learn about it through trial and error, rather than foundational logic about the technology.

Sorta reminds me of the human brain, psychology, and how some people think psychology isn't science. The brain is a black box kind of like an LLM? Some people will think it's still science, others will have less respect.

This perspective might be off base. It's under the assumption that we all agree LLMs are a poorly understood black box and no one really knows how they truly work. I could be completely wrong on that; would love for someone else to weigh in.

Separately, I don't know the author, but agreed it reads more like a pop sci book. Although I only hope to write as coherently as that when I'm 96 y/o.

tgv|9 months ago

> But you couldn't explain your brain from principles or its architecture, you'd need all of neuroscience to do it

That's not a good argument. Neuroscience was constructed by (other) brains. The brain is trying to explain itself.

> The truth is that all of the progress on machine learning is absolutely science.

But not much if you're interested in finding out how our brain works, or how language works. One of the interesting outcomes of LLMs is that there apparently is a way to represent complex ideas and their linguistic connection in a (rather large) unstructured state, but it comes without thorough explanation or relation to the human brain.

> Chomsky is [...] pretty flippant about what this stuff can do.

True, that's his style, being belligerently verbose, but others have been pretty much fawning and drooling over a stochastic parrot with a very good memory, mostly with dollar signs in their eyes.

lxgr|9 months ago

> [...] I can't help but feel it's an a really old take [...]

To be fair the article is from two years ago, which when talking about LLMs in this age arguably does count as "old", maybe even "really old".

jbentley1|9 months ago

"I think we should probably stop listening to Chomsky"

I've been saying this my whole life, glad it's finally catching on

rxtexit|9 months ago

It really shouldn't be hard to understand that a titan of a field has forgotten more than an armchair enthusiast knows.

I remember having thoughts like this until I listened to him talk on a podcast for 3 hours about chatGPT.

What was most obvious is Chomsky really knows linguistics and I don't.

"What Kind of Creatures Are We?" is good place to start.

We should take having Chomsky still around to comment on LLMs as one of the greatest intellectual gifts.

Much of my dismissiveness before listening to his thoughts on LLMs was me projecting my disdain for his politics.

l5870uoo9y|9 months ago

Perhaps it should be mentioned that he is 96 years old.

foobarqux|9 months ago

> The truth is that all of the progress on machine learning is absolutely science

It is not science, which is the study of the natural world. You are using the word "science" as an honorific, meaning something like "useful technical work that I think is impressive".

The reason you are so confused is that you can't distinguish studying the natural world from engineering.

prpl|9 months ago

Reminds me of SUSY, string theory, the standard model and beyond…

What is elegant as a model is not always what works, and working towards a clean model to explain everything from a model that works is fraught, hard work.

I don’t think anyone alive will realize true “AGI”, but it won’t matter. You don’t need it, the same way particle physics doesn’t need elegance

LudwigNagasena|9 months ago

That was a weird ride. He was asked whether AI will outsmart humans and went on a rant about philosophy of science seemingly trying to defend the importance of his research and culminated with some culture war commentary about postmodernism.

dmvdoug|9 months ago

There are lots of stories about Chomsky ranting and wielding his own disciplinary authority to maintain himself as center of the field.

retskrad|9 months ago

It’s time to stop writing in this elitist jargon. If you’re communicating and few people understand you, then you’re a bad communicator. I read the whole thing and thought: wait, was there a new thought or interesting observation here? What did we actually learn?

thomassmith65|9 months ago

I have problems with Noam Chomsky, but certainly none with his ability to communicate. He is a marvel at speaking extemporaneously in a precise and clear way.

mmooss|9 months ago

Where do you see 'elitist jargon'? That didn't even cross my mind.

jdkee|9 months ago

foldr|9 months ago

Most likely not. This is one of his weird pieces co-authored with Jeffrey Watumull. I don’t doubt that he put his name on it voluntarily, but it reads much more like Watumull than Chomsky. The views expressed in the interview we’re commenting on are much more Chomsky-like.

submeta|9 months ago

Chomsky’s notion is: LLMs can only imitate, not understand language. But what exactly is understanding? What if our „understanding“ is just unlocking another level in a model? Unlocking a new form of generation?

roughly|9 months ago

> But what exactly is understanding?

He alludes to quite a bit here - impossible languages, intrinsic rules that don’t actually express in the language, etc - that leads me to believe there’s a pretty specific sense by which he means “understanding,” and I’d expect there’s a decent literature in linguistics covering what he’s referring to. If it’s a topic of interest to you, chasing down some of those leads might be a good start.

(I’ll note as several others have here too that most of his language seems to be using specific linguistics terms of art - “language” for “human language” is a big tell, as is the focus on understanding the mechanisms of language and how humans understand and generate languages - I’m not sure the critique here is specifically around LLMs, but more around their ability to teach us things about how humans understand language.)

npteljes|9 months ago

I have trouble with the notion of "understanding". I get the usefulness of the word, but I don't think that we are capable of actually understanding. I also think that we are not even able to test for understanding - a good imitation is as good as understanding. Also, understanding has limits. In school, they often say in class that you should forget whatever you have been taught so far, because of the new layer of knowledge they are about to teach you. Was the previous knowledge not "understanding" then? Is the new one "understanding"?

If we define "understanding" like "useful", as in, not an innate attribute, but something in relation to a goal, then again, a good imitation, or a rudimentary model can get very far. ChatGPT "understood" a lot of things I have thrown at it, be that algorithms, nutrition, basic calculations, transformation between text formats, where I'm stuck in my personal development journey, or how to politely address people in the email I'm about to write.

>What if our „understanding“ is just unlocking another level in a model?

I believe that it is - that understanding is basically an illusion. Impressions are made up from perceptions and thinking, and extrapolated over the unknown. And just look how far that got us!

foldr|9 months ago

Actually no. Chomsky has never really given a stuff about Chinese Room style arguments about whether computers can “really” understand language. His problem with LLMs (if they are presented as a contribution to linguistic science) is primarily that they don’t advance our understanding of the human capacity for language. The main reasons for this are that (i) they are able to learn languages that are very much unlike human languages and (ii) they require vastly more linguistic data than human children have access to.

dinfinity|9 months ago

> But what exactly is understanding?

I would say that it is to what extent your mental model of a certain system is able to make accurate predictions of that system's behavior.

smokel|9 months ago

Understanding is probably not much more than making abstractions into simpler terms until you are left with something one can relate to by intuition or social consensus.

msh|9 months ago

He should just surrender and give chatgpt whatever land it wants.

bigyabai|9 months ago

Manufactured intelligence to modulate a world of manufactured consent!

I agree with the rest of these comments though, listening to Chomsky wax about the topic-du-jour is a bit like trying to take lecture notes from the Swedish Chef.

godelski|9 months ago

I think many people are missing the core of what Chomsky is saying. It is often easy to miscommunicate and I think this is primarily what is happening. I think the analogy he gives here really helps emphasize what he's trying to say.

If you're only going to read one part, I think it is this:

  | I mentioned insect navigation, which is an astonishing achievement. Insect scientists have made much progress in studying how it is achieved, though the neurophysiology, a very difficult matter, remains elusive, along with evolution of the systems. The same is true of the amazing feats of birds and sea turtles that travel thousands of miles and unerringly return to the place of origin.

  | Suppose Tom Jones, a proponent of engineering AI, comes along and says: “Your work has all been refuted. The problem is solved. Commercial airline pilots achieve the same or even better results all the time.”

  | If even bothering to respond, we’d laugh.

  | Take the case of the seafaring exploits of Polynesians, still alive among Indigenous tribes, using stars, wind, currents to land their canoes at a designated spot hundreds of miles away. This too has been the topic of much research to find out how they do it. Tom Jones has the answer: “Stop wasting your time; naval vessels do it all the time.”

  | Same response.

It is easy to look at metrics of performance and call things solved. But there's much more depth to these problems than our ability to solve some task. It's not just about the ability to do something; the how matters. It isn't important that we are able to navigate better than birds or insects. Our achievements say nothing about what they do.

This would be like saying we developed a good algorithm only by looking at its ability to do some task. Certainly that is an important part, and even a core reason why we program in the first place! But its performance tells us little to nothing about its implementation. The implementation still matters! Are we making good use of our resources? Certainly we want to be efficient, in an effort to drive down costs. Are there flaws or errors that we didn't catch in our measurements? Those things come at huge costs and fundamentally limit our programs in the first place. Task performance tells us nothing about vulnerability to hackers, nor what their exploits will cost our business.

That's what he's talking about.

Just because you can do something well doesn't mean you have a good understanding. It's natural to think the two are related, because understanding improves performance and that's primarily how we drive our education. But it is not a necessary condition, and we have a long history demonstrating that. I'm quite surprised this concept is so contentious among programmers. We've seen the follies of test-driven development. Fundamentally, this is the same. There's more depth than what we can measure here, and we should not be quick to presume that good performance is the same as understanding[0,1]. We KNOW this isn't true[2].

I agree with Chomsky, it is laughable. It is laughable to think that the man in The Chinese Room[3] must understand Chinese. 40 years in, on a conversation hundreds of years old. Surely we know you can get a good grade on a test without actually knowing the material. Hell, there's a trivial case of just having the answer sheet.

[0] https://www.reddit.com/r/singularity/comments/1dhlvzh/geoffr...

[1] https://www.youtube.com/watch?v=Yf1o0TQzry8&t=449s

[2] https://www.youtube.com/watch?v=hV41QEKiMlM

[3] https://en.wikipedia.org/wiki/Chinese_room

paulsutter|9 months ago

"Expert in (now-)ancient arts draws strange conclusion using questionable logic" is the most generous description I can muster.

Quoting Chomsky:

> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.

> One is that the LLM systems are designed in such a way that they cannot tell us anything about language, learning, or other aspects of cognition, a matter of principle, irremediable... The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively.

Response from o3:

LLMs do surface real linguistic structure:

• Hidden syntax: Attention heads in GPT-style models line up with dependency trees and phrase boundaries—even though no parser labels were ever provided. Researchers have used these heads to recover grammars for dozens of languages.

• Typology signals: In multilingual models, languages that share word-order or morphology cluster together in embedding space, letting linguists spot family relationships and outliers automatically.

• Limits shown by contrast tests: When you feed them “impossible” languages (e.g., mirror-order or random-agreement versions of English), perplexity explodes and structure heads disappear—evidence that the models do encode natural-language constraints.

• Psycholinguistic fit: The probability spikes LLMs assign to next-words predict human reading-time slow-downs (garden-paths, agreement attraction, etc.) almost as well as classic hand-built models.

These empirical hooks are already informing syntax, acquisition, and typology research—hardly “nothing to say about language.”

foobarqux|9 months ago

> LLMs do surface real linguistic structure...

It's completely irrelevant, because the point he's making is that LLMs operate differently from the human language faculty, as evidenced by the fact that they can learn language structures that humans cannot learn. Put another way, I'm sure you can point out an infinitude of similarities between the human language faculty and LLMs, but it's the critical differences that make LLMs not useful models of human language ability.

> When you feed them “impossible” languages (e.g., mirror-order or random-agreement versions of English), perplexity explodes and structure heads disappear—evidence that the models do encode natural-language constraints.

This is confused. You can pre-train an LLM on English or on an impossible language and it does equally well on either. Humans, on the other hand, can't do that; ergo LLMs aren't useful models of human language, because they lack this critical distinctive feature.
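For what it's worth, a toy sketch of what an "impossible language" corpus looks like (my own illustration, not the setup from any specific paper): apply a deterministic perturbation — here, reversing word order — that no natural human language exhibits, then pre-train identical models on both the original and perturbed corpora and compare.

```python
# Hypothetical sketch: derive an "impossible language" corpus from
# an English one by reversing word order. Real experiments use a
# battery of such perturbations (shuffles, counting-based rules, etc.).
def make_impossible(sentence: str) -> str:
    """Reverse the word order of a sentence."""
    return " ".join(reversed(sentence.split()))

corpus = ["the cat sat on the mat", "birds can fly south"]
impossible_corpus = [make_impossible(s) for s in corpus]
print(impossible_corpus)
# ['mat the on sat cat the', 'south fly can birds']
```

The debated point is precisely that a transformer will happily fit either corpus, while a human child can only acquire one of them.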

mattw1|9 months ago

[deleted]

netcan|9 months ago

Insect behaviour. Flight of birds. Turtle navigation. A footballer crossing the field to intercept a football.

This is what Chomsky always wanted ai to be... especially language ai. Clever solutions to complex problems. Simple once you know how they work. Elegant.

I sympathize. I'm a curious human. We like elegant, simple revelations that reveal how our complex world is really simple once you know its secrets. This aesthetic has also been productive.

And yet... maybe some things are complicated. Maybe LLMs do teach us something about language... that language is complicated.

So sure. You can certainly critique "ai blogosphere" for exuberance and big speculative claims. That part is true. Otoh... linguistics is one of the areas that ai based research may turn up some new insights.

Overall... what wins is what is most productive.

klabb3|9 months ago

> Maybe LLMs do teach us something about language... that language is complicated.

It certainly teaches us many things. But an LLM trained on only as many words (or, generally speaking, an AI trained on sounds) as a toddler encounters while learning to understand, parse and apply language would not perform well with current architectures. They need orders of magnitude more training material to get even close. Basically, current AI learns slowly; it just seems fast in wall-clock time because it's all computer.

What I mean is: what makes an ALU (CPU) better than a human at arithmetic? It’s just faster and makes fewer errors. Similarly, what makes Google or Wikipedia better than an educated person? It’s just storing and helping you access stored information, it’s not magic (anymore). You can manually do everything mechanically, if you’re willing to waste the time to prove a point.

An LLM does many things better than humans, but we forget they've been trained on all of written history and have hundreds of billions of parameters. If you compare what an LLM can do with the same amount of training to a human, the human is much better even at picking up patterns – current AI's strongest skill. The magic comes from the unseen vast amounts of training data. This is obvious when using them – stray just slightly outside of the training zone into unfamiliar domains and "ability" drops rapidly. The hard part is figuring out these fuzzy boundaries. How far does interpolating training data get you? What are the highest-level patterns encoded in the training data? And most importantly, to what extent do those patterns apply to novel domains?

Alternatively, you can use LLMs as a proxy for understanding the relationship between domains, instead of letting humans label them and decide the taxonomy. One such example is the relationship between detecting patterns and generating text and images – it turns out to be more or less reversible through the same architecture. More such remarkable similarities and anti-similarities are certainly on the horizon. For instance, my gut feeling says that small talk is closer to driving a car but very different from puzzle solving. We don’t really have a (good) taxonomy over human- or animal brain processes.

newAccount2025|9 months ago

[flagged]

Smaug123|9 months ago

From some Googling and use of Claude (and from summaries of the suggestively titled "Impossible Languages" by Moro linked from https://en.wikipedia.org/wiki/Universal_grammar ), it looks like he's referring to languages which violate the laws which constrain the languages humans are innately capable of learning. But it's very unclear why "machine M is capable of learning more complex languages than humans" implies anything about the linguistic competence or the intelligence of machine M.

AIorNot|9 months ago

As much as I think of Chomsky - his linguistics approach is outside looking in, ie observational speculation compared to the last few years of LLM based tokenization semantic spaces, embedding, deep learning and mechanistic interpretation, ie:

Understanding Linguistics before LLMs:

“We think Birds fly by flapping their wings”

Understanding Linguistics Theories after LLMs:

“Understanding the physics of Aerofoils and Bernoulli’s principle mean we can replicate what birds do”

dragochat|9 months ago

...for the lulz try asking ChatGPT "what is Chomsky (still) good for?"

thasso|9 months ago

> The world’s preeminent linguist Noam Chomsky, and one of the most esteemed public intellectuals of all time, whose intellectual stature has been compared to that of Galileo, Newton, and Descartes, tackles these nagging questions in the interview that follows.

By whom?

hatsunearu|9 months ago

That is unbelievable that someone could glaze someone this hard

bigyabai|9 months ago

People who particularly agreed with Chomsky's inherently politicized beliefs, presumably.

mattw1|9 months ago

[deleted]

mattw1|9 months ago

In all seriousness tho, not much of anything he says is taken seriously in an academic sense any more. Universal Grammar, Minimalism, etc. He's a very petty dude. The reason he doesn't engage with GPT is because it suggests that linguistic learning is unlike a theory he spent his whole life [unsuccessfully] promoting, but he's such a haughty know-it-all, that I guess dummies take that for intelligence? It strikes me as not dissimilar to Trump in a way, where arrogance is conflated with strength, intelligence, etc. Fake it til you make it, or like, forever, I guess.

110mAh|9 months ago

[deleted]

glimshe|9 months ago

[deleted]

jsheard|9 months ago

> Please summarize the linked text

Please don't post HN comments that are just giant walls of LLM copypasta.

HardCodedBias|9 months ago

[deleted]

makeitshine|9 months ago

What does age have to do with understanding any of this? He has been developing new, and refining old theories, over decades. It's ridiculous to expect someone to stop purely because of age, or to think they need your protection from discussing their views.

ClayShentrup|9 months ago

[deleted]

jopicornell|9 months ago

I care what one of the most famous philosophers and thinkers of our times says. He's not the most up to date, but calling him an idiot positions you politically and intellectually.

guappa|9 months ago

I agree, why did you decide to comment anyway then?

0xDEAFBEAD|9 months ago

I confess my opinion of Noam Chomsky dropped a lot from reading this interview. The way he set up a "Tom Jones" strawman and kept dismissing positions using language like "we'd laugh", "total absurdity", etc. was really disappointing. I always assumed that academics were only like that on reddit, and in real life they actually made a serious effort at rigorous argument, avoiding logical fallacies and the like. Yet here is Chomsky addressing a lay audience that has no linguistics background, and instead of even attempting to summarize the arguments for his position, he simply asserts that opposing views are risible with little supporting argument. I expected much more from a big-name scholar.

"The first principle is that you must not fool yourself, and you are the easiest person to fool."

protocolture|9 months ago

Havent read the interview, but interviews arent formal debates and I would never expect someone to hold themselves to that same standard.

The same way that reddit comments arent a formal debate.

Mocking is absolutely useful. Sometimes you debate someone like Graham Hancock and force him to confirm that he has no evidence for his hypotheses; then, when you discuss the debate, you mock him relentlessly for having no evidence for his hypotheses.

> Yet here is Chomsky addressing a lay audience that has no linguistics background

So not a formal debate or paper where I would expect anyone to hold to debate principles.

foobarqux|9 months ago

"Tom Jones" isn't a strawman, Chomsky is addressing an actual argument in a published paper from Steven Piantadosi. He's using a pseudonym to be polite and not call him out by name.

> instead of even attempting to summarize the arguments for his position..

He makes a very clear, simple argument, accessible to any layperson who can read. If you are studying insects, what you are interested in is how insects do it, not what other mechanisms you can come up with to "beat" insects. This isn't complicated.

hackinthebochs|9 months ago

There's a reason Max Planck said science advances one funeral at a time. Researchers spend their lives developing and promoting the ideas they cut their teeth on (or, in this case, developed themselves), and their view of what is possible becomes ossified around these foundational beliefs. Expecting him to be flexible enough in his advanced age to view LLMs with a fresh perspective, rather than one strongly informed by his core theoretical views, is expecting too much.

lanfeust6|9 months ago

I'm noticing that leftists overwhelmingly toe the same line on AI skepticism, which suggests to me an ideological motivation.

thomassmith65|9 months ago

Chomsky's problem here has nothing to do with his politics, but unfortunately a lot to do with his long-held position in the Nature/Nurture debate - a position that is undermined by the ability of LLMs to learn language without hardcoded grammatical rules:

  Chomsky introduced his theory of language acquisition, according to which children have an inborn quality of being biologically encoded with a universal grammar
https://psychologywriting.com/skinner-and-chomsky-on-nature-...

Supermancho|9 months ago

> AI skepticism

Isn't AI optimism an ideological motivation? It's a spectrum, not a mental model.

numpad0|9 months ago

Leftists and intellectuals overlap a lot. To many of them, LLM text must still be full of six-fingered hands.

For Chomsky specifically, the entire existence of LLMs, however it's framed, is a massive middle finger to him and a strike-through on a large part of his academic career. As much as I find his UG theory and its supporters irritating, it might feel a bit unfair to someone his age.

protocolture|9 months ago

99%+ of humans on this planet do not investigate an issue; they simply accept a trusted opinion on the issue as fact. If you think this is a left-only issue, you haven't been paying attention.

Usually what happens is the information bubble bursts, and gets corrected, or it just fades out.

rxtexit|9 months ago

Then you obviously didn't listen to a word Chomsky has said on the subject.

I was quite dismissive of him on LLMs until I realized the utter hubris and stupidity of dismissing Chomsky on language.

I think it was someone asking if he was familiar with Wittgenstein's Blue and Brown books, and of course he was: he was already an assistant professor at MIT when they came out.

I still chuckle at my own intellectual arrogance and stupidity when thinking about how I was dismissive of Chomsky on language. I barely know anything, and I was being dismissive of one of the unquestionable titans and historic figures of a field.

internet_points|9 months ago

This is a great way to remove any nuance and chance of learning from a conversation. Please don't succumb to black-and-white (or red-and-blue) thinking, it's harmful to your brain.

santoshalper|9 months ago

Or an ideological alignment of values. Generative AI is strongly associated with large corporations that are untrusted (to put it generously) by those on the left.

An equivalent observation might be that the only people who seem really, really excited about current AI products are grifters who want to make money selling it. Which looks a lot like Blockchain to many.

EasyMark|9 months ago

I think viewing the world as either leftist or right-wing is a rather limiting philosophy and way to go through life. Most people are a lot more complicated than that.

mattw1|9 months ago

I have experienced this too. It's definitely part of the religion but I'm not sure why tbh. Maybe they equate it with like tech is bad mkay, which, looking at who leads a lot of the tech companies, is somewhat understandable, altho very myopic.

A4ET8a8uTh0_v2|9 months ago

It is an unfortunate opinion, because I personally hold Chomsky in fairly high regard and give most of his thoughts I am familiar with a reasonable amount of consideration, if only because he could, in the olden days at least, articulate his points well and make you question your own thought process. That no longer seems to be the case, as I found the linked article somewhat difficult to follow. I suppose age can get to anyone.

Not that I am an LLM zealot. Frankly, the clear trajectory it puts humans on makes me question our futures in this timeline. But even as a merely amused but bored middle-class rube, despite the serious issues with it (privacy, detailed personal profiling that surpasses existing systems, energy use, and the actual power of those who wield it), I can see it being implemented everywhere with a mix of glee and annoyance.

I know for a fact it will break things and break things hard and it will be people, who know how things actually work that will need to fix those.

I will be very honest though. I think Chomsky is stuck in his internal model of the world and unable to shake it off. Even his arguments fall flat, because they don't fit the domain well. It seems like they should given that he practically made his name on syntax theory ( which suggests his thoughts should translate well into it ) and yet.. they don't.

I have a minor pet theory on this, but I am still working on putting it into some coherent words.

petermcneeley|9 months ago

I recently saw a new LLM that was fooled by "20 pounds of bricks vs 20 feathers". These are not reasoning machines.

dghlsakjg|9 months ago

I recently had a computer tell me that 0.1 + 0.2 != 0.3. It must not be a math capable machine.

Perhaps it is more important to know the limitations of tools rather than dismiss their utility entirely due to the existence of limitations.
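The float example is easy to reproduce, and the "know the limitation" part amounts to comparing with a tolerance instead of exact equality:

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their
# floating-point sum is not exactly 0.3.
total = 0.1 + 0.2
print(total)        # 0.30000000000000004
print(total == 0.3) # False

# Knowing the limitation, compare with a tolerance instead.
print(math.isclose(total, 0.3))  # True
```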

StrandedKitty|9 months ago

Surely it just reasoned that you made a typo and "autocorrected" your riddle. Isn't this what a human would do? Though to be fair, a human would ask you again to make sure they heard you correctly. But it would be kind of annoying if you had to verify every typo when using an LLM.

HDThoreaun|9 months ago

Tons of people fall for this too. Are they not reasoning? LLMs can also be bad reasoning machines.

downboots|9 months ago

But are you aware of the weight comparison of a gallon of water vs a gallon of butane ?

fzzzy|9 months ago

20 feathers?

mrandish|9 months ago

[Edit to remove: It was not clear that this was someone else's intro re-posted on Chomsky's site]

kweingar|9 months ago

This is an interview published in Common Dreams, rehosted at Chomsky's site. Those are the interviewer's words, not Chomsky's.

kevinventullo|9 months ago

Maybe I am missing context, but it seems like he’s defending himself from the claim that we shouldn’t bother studying language acquisition and comprehension in humans because of LLMs?

Who would make such a claim? LLMs are of course incredible, but it seems obvious that their mechanism is quite different from that of the human brain.

I think the best you can say is that LLMs could motivate lines of inquiry into human understanding, especially because we can essentially do brain surgery on an LLM in action in a way that we can’t with humans.

johnfn|9 months ago

> It’s as if a biologist were to say: “I have a great new theory of organisms. It lists many that exist and many that can’t possibly exist, and I can tell you nothing about the distinction.”

> Again, we’d laugh. Or should.

Should we? This reminds me acutely of imaginary numbers. They form a great theory of numbers that lists many numbers that do 'exist' and many that can't possibly 'exist'. And we did laugh when imaginary numbers were first introduced - the name itself was intended as a derogatory term for the concept. But who's laughing now?

chongli|9 months ago

Imaginary numbers are not relevant at all here. The name has nothing whatsoever to do with the everyday use of the word “imaginary”. They could just as easily have been called “vertical numbers” (and real numbers “horizontal numbers”) to more clearly illustrate their geometric interpretation in the complex plane.

The term “imaginary number” was coined by René Descartes as a derogatory, and the ill intent behind the term has stuck ever since. I suspect his purpose was theological rather than mathematical, and we are all the worse for it.
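To make the “vertical numbers” reading concrete, here is a minimal Python sketch (an editorial illustration, not part of the original comment): multiplying by i rotates a point 90 degrees counterclockwise in the complex plane, and nothing about it is any less real than ordinary arithmetic.

```python
z = 1 + 0j          # the point (1, 0) on the horizontal ("real") axis
print(z * 1j)       # rotated onto the vertical ("imaginary") axis: (0, 1)
print(z * 1j * 1j)  # two 90-degree rotations land on (-1, 0)
assert 1j * 1j == -1  # hence i*i == -1, with no mysticism required
```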

kelsey978126|9 months ago

This is the point where I realized he has no clue what he is saying. There are many creatures that once existed that can never exist again on Earth, due to the changes the planet has gone through over millions, even billions, of years. The oxygen-rich atmosphere that supported the dinosaurs, for instance. If we had some kind of system that could put together proper working DNA for all the creatures that ever actually existed on this planet, some half of them would be completely nonviable if introduced into today's ecosystem. He is failing to see the incredible understanding of systems we are producing with this work. But he is a very old man from a very different time, and contrarianism is often the only way to look smart or reasoned when you have no clue what's actually going on, so I am not shocked by his take.

bubblyworld|9 months ago

In the case of complex numbers mathematicians understand the distinction extremely well, so I'm not sure it's a perfect analogy.

irrational|9 months ago

I have a degree in linguistics. We were taught Chomsky’s theories of linguistics, but also taught that they were not true. (I don’t want to say which university it was, since this was 25 years ago and for all I know that linguistics department no longer teaches against Chomsky.) The end result is that I don’t take anything Chomsky says seriously, so it is difficult for me to engage with his ideas.

windexh8er|9 months ago

I'm rather confused by this statement. I've read a number of Chomsky pieces and have listened to him speak a number of times. To say his theories were all "not true" seems, to an extent, almost impossible.

Care to expand on how his theories can be taught in such a binary way?

ggm|9 months ago

This reminds me of the debates over F.R. Leavis and the impact they had on modern English teaching worldwide. There is a small, dying cohort of English professors who are refugees from that internecine warfare.

The same thing happened in astronomy: students of Fred Hoyle can't work in some institutions. &c &c.

Calavar|9 months ago

I don't have a degree in linguistics, but I took a few classes about 15 years ago, and Chomsky's works were basically treated as gospel. My university's linguistics faculty included several of his former graduate students, though, so maybe there's a bias factor. In any case, it reminds me of an SMBC comic about how math and science advance over time [1]

[1] https://smbc-wiki.com/index.php/How-math-works

next_xibalba|9 months ago

Chomsky is always saying that LLMs and such can only imitate, not understand, language. But I wonder whether there is a degree of sophistication at which he would concede these machines exceed "imitation". If his point is that LLMs arrive at language in a way different from humans... great. But I'm not sure how he can argue that some kind of extremely sophisticated understanding of natural language is not embedded in these models in a way that, at this point, exceeds the average human. In all fairness, this was written in 2023, but given his longstanding stubbornness on this topic, I doubt it would make a difference.

mattnewton|9 months ago

I think what would "convince" Chomsky is something more akin to the explainability research currently in its infancy, producing something like a branch of information theory for language and thought.

Chomsky talks about how the current approach can't tell you what humans are doing, only approximate it. The example he has given in the past: taking thousands of hours of footage of falling leaves and training a model to generate new leaf-falling footage, versus producing a model of gravity, gas mechanics for the air currents, and the air resistance of leaves. The latter representation is distilled down into something that tells you what is happening, at the end of some scientific inquiry; the former is an opaque simulation, fine for engineering purposes if all you wanted was more leaf-falling footage.

So I interpret Chomsky as meaning "Look, these things can be great for an engineering purpose but I am unsatisfied in them for scientific research because they do not explain language to me" and mostly pushing back against people implying that the field he dedicated much of his life to is obsolete because it isn't being used for engineering new systems anymore, which was never his goal.

flornt|9 months ago

I guess it's because an LLM does not understand meaning the way you understand what you read or thought. LLMs are machines that modulate hierarchical positions, ordering the placement of a-signifying signs without a clue as to the meaning of what they have ordered (that's why machines can hallucinate: they have no sense of what they express).

icedrift|9 months ago

From what I've read and watched of Chomsky, he's holding out for something that truly cannot be distinguished from a human no matter how hard you try.

ramoz|9 months ago

It's always good to humble the ivory tower.