That very much depends on which AGI definition you are using. I imagine there are a dozen or so variants out there. See also "AI" and "agents" and (apparently) "vibe coding" and pretty much every other piece of jargon in this field.
I think it's a very widely accepted definition, and there really are no competing definitions either, as far as I know. While some people might think AGI means superintelligence, that's only because they've heard the term but never bothered to look up what it means.
The Towers of Hanoi example demonstrates that SOTA RLMs struggle with tasks a pre-schooler can solve.
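(For reference, the classic recursive algorithm fits in a few lines; a minimal Python sketch, just to underline how little machinery the task needs:)

    def hanoi(n, src, dst, aux):
        # Move n disks from src to dst, using aux as the spare peg.
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst)   # clear the way
        print(f"move disk {n}: {src} -> {dst}")
        hanoi(n - 1, aux, dst, src)   # stack them onto the target

    hanoi(3, "A", "C", "B")  # prints the 2^3 - 1 = 7 moves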
The implication here is that they excel at things that occur very often and are bad at novelty. This is good for individuals (by using RLMs I can quickly learn about many other areas of the human body of knowledge in a way that would be impossible or inefficient with traditional methods), but they are bad at innovation. Which, honestly, is not necessarily bad: we can offload lower-level tasks[0] to RLMs and pursue innovation as humans.
[0] The usual caveats apply: over time, the population of people actually good at these low-level tasks will diminish, just as we now have very few assembly programmers for Intel/AMD processors.
The argument of (1) doesn't really have anything to do with humans or anthropomorphising. We're not even discussing AGI; we're just talking about the property of "thinking".
If somebody claims "computers can't do X, hence they can't think", a valid counterargument is "humans can't do X either, but they can think."
It's not important for the rebuttal that we used humans, just that there exist entities that don't have property X but are able to think. This shows X is not required for our definition of "thinking".
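(That rebuttal is plain first-order logic. A minimal Lean sketch, with E, Think, and X as placeholder names, in case the shape helps:)

    -- A single witness that thinks but lacks X refutes "X is necessary for thinking".
    example {E : Type} (Think X : E → Prop)
        (h : ∃ e, Think e ∧ ¬(X e)) : ¬(∀ e, Think e → X e) :=
      fun hall => h.elim (fun e he => he.2 (hall e he.1))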
Why does AGI even need to be as good as the average human? Someone with an 80 IQ is still smart enough to reason and do plenty of menial tasks. Also, I'm not sure why AGI needs to be as good at every task? The average human will outperform others at a few tasks and be terrible at many others.
Because that’s how AGI is defined. https://en.wikipedia.org/wiki/Artificial_general_intelligenc...: “Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks”
But yes, you’re right that software need not be AGI to be useful. Artificial narrow intelligence or weak AI (https://en.wikipedia.org/wiki/Weak_artificial_intelligence) can be extremely useful, even something as narrow as a service that transcribes speech and can’t do anything else.
AGI should perform at the level of an experienced professional in every task. The average human is useless at pretty much everything, but capable of learning to perform almost any task, given enough motivation and effort.
Or perhaps AGI should be able to reach the level of an experienced professional in any task. Maybe a single system can't be good at everything, if there are inherent trade-offs in learning to perform different tasks well.
For comparison, the average person can't print Hello World in Python. Your average programmer (probably) can.
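(For reference, the one-liner in question:)

    print("Hello World")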
It's surprisingly simple to be above average at most tasks, which people often confuse with having expertise. It's probably pretty easy to get into the 80th percentile of most subjects. That won't put you in the 80th percentile of people who actually do the thing, because most people don't do it at all. I'd wager the 80th percentile is still amateur.
The average human is good at something and sucks at almost everything. Peak human performance at chess and average performance at chess differ by 7 orders of magnitude.
Most people are. One of my pet peeves is that people falsely equate AGI with ASI, constantly. We have had full AGI for years now. It is a powerful tool, but not what people tend to think of as god-like “AGI.”
TeMPOraL|8 months ago
Find me one that can solve it entirely in their head, without touching the actual thing or externalizing state.
bastawhiz|8 months ago
Since there aren't really a whole lot of unique examples of general intelligence out there, humans become a pretty straightforward basis for comparison.
MoonGhost|8 months ago
But only a limited number of tasks per human.
> Or perhaps AGI should be able to reach the level of an experienced professional in any task.
Even if it performed merely better than an untrained human, doing so on any task would already be superhuman, as no human can do that.