AbrahamParangi | 4 months ago
Meanwhile, the technology continues to progress. The level of psychological self-defense is unironically more interesting than what he has to say.
Quite a wide variety of people find AI deeply ego threatening to the point of being brainwormed into spouting absolute nonsense, but why?
ACCount37 | 4 months ago
When AI beat humans at chess, it didn't result in humans revising their idea of the capabilities of machine intelligence upwards. It resulted in humans revising their notion of how much intelligence is required to play chess at world champion level downwards, and by a lot.
Clearly, there's some sort of psychological defense mechanism in play. First, we see "AI could never do X". Then an AI does X, and the sentiment flips to "X has never required any intelligence in the first place".
goalieca | 4 months ago
tim333 | 4 months ago
>Norvig is clearly very interested in seeing what Hinton could come up with. But even Norvig didn’t see how you could build a machine that could understand stories using deep learning alone. https://www.newyorker.com/news/news-desk/is-deep-learning-a-...
sailingparrot | 4 months ago
He is not brainwashed; this just happens to be his business. What happens to Gary Marcus if Gary Marcus stops talking about how LLMs are worthless? He just disappears. No one ever interviews him for his general thoughts on ML, or to discuss his (nonexistent) research. His only claim to fame is being the loudest contrarian in the LLM world, so he has to keep doing that or accept becoming irrelevant.
Slight tangent, but this is a recurring pattern in fringe beliefs: e.g. the prominent flat-earther who long ago accepted that the earth is not flat but can't stop the act, because all their friendships and income are tied to that belief.
Not to say that believing LLMs won't lead to AGI is fringe, but it does show the danger (and benefits, I guess) of tying your entire identity to a specific belief.
ModernMech | 4 months ago
It makes sense when you look at this as a wider conversation. Every time Sam Altman, Elon Musk, and co. predict that AGI is just around the corner, that their products will be smarter than all of humanity combined, and that it's like having an expert in everything in your pocket, people like Gary Marcus respond just as extremely in the opposite direction. Maybe if the AI billionaires with the planet-wide megaphones weren't so bombastic in their claims, certain other people wouldn't be so bombastic in their pushback.
brazukadev | 4 months ago
And at the same time, his predictions are becoming more and more real
lairv | 4 months ago
Gary Marcus said that deep learning was hitting a wall 1 month before the release of DALL-E 2, 6 months before the release of ChatGPT, and 1 year before GPT-4 — arguably 3 of the biggest milestones in deep learning.