Changed My Mind After Reading Larson's "The Myth of Artificial Intelligence"
10 points | abss | 2 years ago
Larson argues convincingly that current AI (I include LLMs here, since they are still induction- and statistics-based), despite its impressive capabilities, represents a kind of technological dead end in our quest for AGI. The notion of achieving a true AGI, a system with human-like understanding and reasoning capabilities, seems more elusive than ever. The current trajectory of AI development, heavily reliant on data and computational power, doesn't necessarily lead us toward AGI. Instead, we might be merely crafting sophisticated tools, akin to cognitive prosthetics, that augment but do not replicate human intelligence.
The book emphasizes the need for radically new ideas and directions if we are to make any significant progress toward AGI. The concept of a technological singularity, where AI surpasses human intelligence, appears more like a distant mirage rather than an approaching reality.
Erik J. Larson's book compellingly highlights the deficiencies of deduction and induction as methods of inference in artificial intelligence. It also underscores the lack of a solid theoretical foundation for abduction, suggesting that current AI, including large language models, faces significant limitations in replicating complex human reasoning.
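To make the distinction concrete, here is a toy sketch (my own illustration, not from the book) of the three modes of inference as trivial Python functions. The rain/sprinkler example is an assumption chosen for familiarity:

```python
# Toy illustration of Larson's three modes of inference.

def deduce(rule, case):
    """Deduction: rule + case -> certain result."""
    return rule[case]  # "rain wets streets" + "it rained" -> "wet street"

def induce(observations):
    """Induction: many (case, result) pairs -> a generalised rule.
    This is roughly what statistical learning, LLM training included, does."""
    rule = {}
    for case, result in observations:
        rule[case] = result  # naive generalisation from finite data
    return rule

def abduce(rule, result):
    """Abduction: rule + observed result -> best-guess explanation.
    The guess is not guaranteed: several causes may fit the same result."""
    candidates = [case for case, r in rule.items() if r == result]
    return candidates[0] if candidates else None

rule = induce([("rain", "wet street"), ("sprinkler", "wet street")])
print(deduce(rule, "rain"))        # -> wet street
print(abduce(rule, "wet street"))  # -> rain (one plausible cause among several)
```

The point Larson presses is that the third function has no solid theoretical foundation in AI: picking the *best* explanation from an open-ended space of candidates is exactly what current systems can't do in a principled way.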
I'm curious to hear your thoughts on this. Do you think our current approach to AI, especially with LLMs, is fundamentally limited? Is the idea of AGI as we conceive it now just a myth?
SirensOfTitan|2 years ago
As a result of this, you can read about AGI and think the authors are debating whether a system is an AGI or a proto-AGI, when they're actually debating where the line is drawn.
Taking a page from Philosophy in the Flesh, I think that human reasoning and cognition are intrinsically related to the body—like even metaphor is inherently body and environment related. Have we really considered what the human mind would act like completely disembodied? Is language on its own really the right context for AGI to be born in?
lee-rhapsody|2 years ago
This is a really interesting point!
sk11001|2 years ago
The things the book talks about are not the same things that exist today.
billywhizz|2 years ago
"the problems AI currently solves can only be cracked if very large repositories of data are available to solve them. ChatGPT is no exception—it makes the point. In fact, it's a continuation of previous innovations of Big Data AI taken to an extreme. The AI scientist's dream of general intelligence, often referred to as Artificial General Intelligence (AGI), remains as elusive as ever."
https://erikjlarson.substack.com/p/ai-is-broken
quickthrower2|2 years ago
Therefore it is limited. It might do AGI in a universe where we could harness unlimited energy and create unlimited matmul compute. But if it needs to run on 3 square meals a day it doesn’t have much hope!
kypro|2 years ago
The idea that intelligence isn't just statistics is, to me, the far more radical position here. If it's not statistical modelling, then what is it? Intelligence is not magic. Any prediction requires some amount of probabilistic modelling.
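A minimal sketch of the "prediction is probabilistic modelling" point: a bigram model that predicts the next word as the most frequent follower seen in training. The training string is made up for illustration; real LLMs do something vastly richer, but of the same statistical kind:

```python
# Bigram next-word prediction: counting follower frequencies and
# returning the statistically most likely continuation.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    followers = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        followers[a][b] += 1
    return followers

def predict_next(model, word):
    """Return the most likely next word, or None for unseen words."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # -> cat ("the cat" occurs twice, "the mat" once)
```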
That said, I think there is probably a significant amount of meta modelling required to achieve true AGI, and it seems unlikely that current architectures can achieve this. The fact that LLMs don't seem to have inner thought, and that learning and inference are separate, is a huge limitation of current algorithms.
It seems to me, when you think about what we humans do, that the ability to meta-analyse and pipe our thoughts through various processes in our heads, then discard inaccuracies and adapt to new information, is important. LLMs seem rigid in their thinking because they don't do this meta reasoning, and are completely unable to adapt to new information. Current LLMs act kinda like humans do during exams. In an exam we feel we must provide an answer to every question, and if we don't know the answer we'll just make something up. But outside of exams humans don't do this. If we don't know something we'll gather information, test theories, ask questions, then adapt and take on board new information.
LLMs don't do this, and therefore feel rigid in their thinking and often act irrationally. For example, you can convince an LLM of something with some faulty information or logic, then start a new chat and it's reverted back to whatever position it was giving before. They only really get better at reasoning when we humans walk them through meta reasoning processes, but they cannot do this themselves, and even when we help them they do not adapt in light of it.
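The "new chat resets the position" behaviour follows from the architecture: at inference time the weights are frozen, and the only mutable state is the message list passed in with each call. A toy stand-in (not a real LLM API; the "sky is green" persuasion is a made-up example) makes the statelessness visible:

```python
# Why a fresh chat reverts an LLM's position: in-context "persuasion"
# lives only in the message list, never in the frozen weights.

class FrozenModel:
    """Stand-in for a trained model whose parameters never change at inference."""
    def __init__(self):
        self.position = "the sky is blue"  # belief fixed by training

    def reply(self, messages):
        # Concessions exist only within this conversation's context window.
        if any("actually the sky is green" in m for m in messages):
            return "you're right, the sky is green"
        return self.position

model = FrozenModel()
print(model.reply(["actually the sky is green"]))  # concedes within this chat
print(model.reply([]))                             # fresh chat: trained position returns
```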
What I will say is that we have clearly made one critical discovery on the road to AGI (and superintelligence), and that is that scale matters. We also seem to be making progress on the algorithmic side, and are certainly edging closer to something that, with enough scale, could approach something that looks like AGI.
I'll also add that what LLMs are able to do in their infancy is frankly incredible, and it's not hard to imagine that it would only take a few additional algorithmic breakthroughs from here to get to something very close to AGI, if not achieve it. My guess is that we already have the scale and much of the base neural network architecture required. The main limitation, in my opinion, is that training and inference are separate steps – not that LLMs use statistics.
Finally, no one serious that I'm aware of believes that simply scaling current iterations of LLMs will achieve AGI anyway. Dismissing the possibility of AGI because current LLM architecture is missing a few things seems both silly and uncharitable to me. The important disagreement here is just how many more algorithmic breakthroughs we need to achieve AGI. We all know GPT-4's reasoning ability is grounded too heavily in low-level statistical reasoning, and is incapable of the higher-level meta reasoning required for more advanced intelligence. The real question is how far we are from making progress on this.
Just my opinions anyway.
haltist|2 years ago
The radical architecture required to achieve AGI is to treat it as a religion and build artifacts, rituals, and practices that will manifest the true technological god and govern the world with nothing more than mathematics implemented on GPUs.
OhNoNotAgain_99|2 years ago
[deleted]