I wish authors like these would put a bit more effort and research into these posts. There's a big gap between something being clickbait and something being worthy of the headline they dared to slap on it. This post probably harmed AGI research more than it helped.
The author shows profound naivety by listing these in supposed "progressive difficulty" order, without evidence of such, all the while proclaiming that such a list is more important than the AGI itself. And I'm curious why the author chose to publish this in groups whose readers are AGI-aware but AGI-laymen nonetheless. It's wonderful that more people are reading about AGI in 2022. Great stuff. But please don't waste those gains on this drivel.
If you want to read about AGI, there are better places for that: http://agi-conf.org/2022/accepted-papers/
The tasks under "physical intelligence" are an indication of how badly off that part of AI is. "Physical robot that can survive for a week in an urban environment" is too ambitious for an initial goal. Although, arguably, a Waymo or Cruise Automation self-driving car could do it now if provided with an automated charging station.
I'd suggest, as near term goals:
- A robot that can pick and pack at least 90% of what Amazon sells without needing human intervention more than once a day. (Get acquired by Amazon at a 9-figure valuation.)
- A robot that can clean a store or office building's floors or carpets without needing human intervention more than once a week. (That is, a useful industrial-strength Roomba.)
- A robot that can, by feel, do single-pin lock picking. (Currently, getting a key into a lock is an advanced robotics task.)
- A robot that can restock grocery store shelves.
- Small forklift robots which can cooperate to move larger furniture. (Good way to get into multi-robot coordination in unstructured environments.)
- A small robot with the agility of a squirrel.
More advanced:
- Assemble IKEA furniture.
- Cooperating robots which can do basic house construction tasks, such as installing wallboard or running electrical cable or pipe.
The author writes:
"In the early days of artificial intelligence, the field was defined by a single goal: to build a machine that could think and behave like a human. We call that AGI, or artificial general intelligence, and it’s humanity’s final tech frontier." That's too human-limited. There are stages beyond that, such as running a large coordinated multi-robot operation, or a whole society of robots.
For some of these examples, I suspect that the short-term transitional solution will be to simply reshape our spaces to make them co-accessible to humans and robots. For example, a robot could much more easily stock shelves in a fully automated grocery if items came in more standard package sizes (standardized like shipping containers or paper sizes), were guaranteed to have readable barcodes, and were part of a fully automated loading-dock-to-shelf system-of-systems with all standardized pieces.
Could probably handle automated inventory management, expiration dates, recalls, etc. as well.
In fact, with enough standardization it would probably be conceivable to go directly from factory to store shelf without a human in the loop (see the sketch after this comment).
With enough automation the stores could be made more JIT and take up less space. Keep more things in a warehouse section, have smaller shelves, and restock rapidly throughout the day.
There are really two problems though:
1. We're working too hard to get these things to work in a human-designed or human-adapted world. This kind of store would probably be pretty boring for humans to interact with (think less interesting than a Costco).
2. All this automation is way more expensive than a handful of low-paid humans across 2 or 3 shifts. Anecdote: I remember, when I was younger, some highly automated test sites for fast-food franchises. They'd completely automated the drink pouring, or the cooking and assembly of some menu items. They all disappeared very quickly and were never repeated. The TCO, including business lost to downtime, was crushing... think of your average McDonald's soft-serve machine, except the entire business depends on it working perfectly all day, every day of the week.
But if this is solved, or acceptable, a good test target product set would likely be cereal, soda, or canned goods. That makes up the bulk of the store interior. Could probably be extended to the bakery pretty quickly. This would leave harder-to-handle products like meats, produce, and so on to human hands for a while, but those could probably eventually be overcome with enough millions in R&D or behavior changes in the public.
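A minimal sketch of the standardized dock-to-shelf idea above, assuming everything described in that comment (standard module sizes, guaranteed-readable barcodes, JIT restocking from a backroom); all names, sizes, and thresholds here are hypothetical stand-ins:

```python
from dataclasses import dataclass
from enum import Enum

class ModuleSize(Enum):   # "standardized like shipping containers or paper sizes"
    QUARTER = 1
    HALF = 2
    FULL = 4

@dataclass
class Package:
    barcode: str          # guaranteed readable, by spec
    size: ModuleSize

@dataclass
class Shelf:
    barcode: str          # each shelf slot carries exactly one product line
    capacity: int         # in QUARTER-module units
    stocked: int = 0

def restock(shelf: Shelf, backroom: list) -> None:
    """JIT restock: top the shelf up from the warehouse section all day."""
    for pkg in list(backroom):
        if pkg.barcode != shelf.barcode:
            continue
        if shelf.stocked + pkg.size.value > shelf.capacity:
            break             # shelf full - leave the rest in the backroom
        backroom.remove(pkg)
        shelf.stocked += pkg.size.value

shelf = Shelf(barcode="0001-cereal", capacity=8, stocked=2)
backroom = [Package("0001-cereal", ModuleSize.HALF) for _ in range(5)]
restock(shelf, backroom)
print(shelf.stocked, len(backroom))  # 8 2 - shelf topped up, two packs left
```

With only a few standard sizes, one loop like this handles cereal, soda, or canned goods alike; the hard part is the physical standardization, not the software.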
The already-placed checkmarks are wrong. An AGI by this list would be able to beat humans at chess and win an art competition, both of which are already checked off. But the chess-playing AI and the art-competition AI are completely separate systems: the art AI can't win a game of chess, it doesn't know a single thing about chess and never will, and vice versa. By this checkmark logic you could complete the whole list with one specialized neural net for each task, and while that would be a quantum leap in technology, with huge ramifications for the world, it still wouldn't be AGI. I know the author discusses this at the beginning, saying that computers can win at chess without being general, but then the idea of generality is basically just left there. Items should only be checked off when one system is capable of completing both tasks.
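To make that objection concrete, here is a toy version of the proposed check-off rule: a task counts only if a single system has completed every task on the list. The task and system names are made up for illustration:

```python
completions = {  # task -> the set of systems that have completed it
    "beat humans at chess": {"stockfish"},
    "win an art competition": {"midjourney"},
}

def agi_checkmarks(completions: dict) -> dict:
    # Systems that have completed *every* task so far.
    systems = set.union(*completions.values())
    general = {s for s in systems
               if all(s in done for done in completions.values())}
    # A task is legitimately checked off only if one system covers them all.
    return {task: bool(general) for task in completions}

print(agi_checkmarks(completions))
# -> both False: separate specialists check nothing off. If one system
#    appeared in both sets, both items would flip to True.
```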
Just curious: is it absolutely necessary that a single model solve all these problems to satisfy you? As long as it's a finite and relatively small list, why not allow for N models boxed together, wrapped in a switch statement? Or, if you're picky, a top-level language model which tries to decide which of its subsystems to employ for the problem at hand. After all, the human brain is at least somewhat partitioned (although not at the granularity of chess vs. hotdog identification).
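A minimal sketch of the "N models wrapped in a switch statement" idea, with a hypothetical top-level router standing in for the dispatching language model; the specialist classes are placeholders, not any real system's API:

```python
class ChessModel:
    def solve(self, task: str) -> str:
        return "Nf3"        # stand-in for a task-specific chess engine

class ArtModel:
    def solve(self, task: str) -> str:
        return "<image>"    # stand-in for a text-to-image model

class Router:
    """Top-level dispatcher: classify the task, then delegate."""
    def __init__(self):
        self.specialists = {"chess": ChessModel(), "art": ArtModel()}

    def classify(self, task: str) -> str:
        # Stand-in for a learned classifier - e.g. a language model prompted
        # to name the best subsystem for the task at hand.
        return "chess" if "move" in task.lower() else "art"

    def solve(self, task: str) -> str:
        return self.specialists[self.classify(task)].solve(task)

print(Router().solve("Find the best move in this position"))  # -> "Nf3"
```

Whether such a bundle "counts" as general intelligence is exactly the disagreement in this subthread; the switch statement is cheap, but someone still has to hand-build each specialist.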
https://aeon.co/essays/how-close-are-we-to-creating-artifici...
That was an amazing read, thank you. It clearly lays out the proposition that the first (if not only) problem to solve is philosophical. Which brings us to the sorry state of academic philosophy today: a bunch of people who, for the most part, don't know math or have a solid understanding of reality (including quantum mechanics). A group among which "anti-scientism" is a more and more popular topic for some reason. A group which, whenever I ask, agrees there's no moral standing for eating meat, yet almost none of them are vegans. Only slightly less forgivable than cardiologists who die of heart attacks, I suppose.
Whatever happens, one can bet this AGI breakthrough, if it occurs, will happen quite far from this group of people.
Reading this list makes me think: if an AI can make an income without being told explicitly how to do it, is it an AGI? This metric seems reductive on the surface, but it takes quite a bit of intelligence to understand how society works and how to provide value.
> AI creates a (crypto)currency that has USD $1M+ market capitalization (at 2022 adjusted value) for more than 6 months
Is this a sign that the AI has succeeded at becoming intelligent, or failed? I don't think any of the cryptocurrency milestones are plausible signs of progress. Surely GPT could have generated some preposterous "white paper" circa 2021 and, with a little help, achieved that.
https://www.marketingfirst.co.nz/storage/2018/06/prelude-lif...
The whole book is great as well.
> In the early days of artificial intelligence, the field was defined by a single goal: to build a machine that could think and behave like a human. (…) We can now build machines that can beat humans at specific tasks like playing chess or go, and we are starting to see machines that can learn to perform multiple tasks.
I'd say, "machine beats human in chess" doesn't mean what it was supposed to mean in Turing's days. Meaning, rather than being a proof of deep consideration, it has moved towards generalizing pattern recognition and library lookups. Rather than proving a point in (ad-hoc) decision making (Turing's "ban"), it's an application of data.
I’ve been starting to wonder, is GPT-3 the beginning of AGI?
I know, I know, it’s just a language model.
But I’ve been thinking. About my thinking. I think in words. I solve problems in words. I communicate results in words. If I had no body, and you could only interact with me via text, would I look that much different than GPT?
Does AGI really need anything more than words? Is it possible that simply adding more parameters to today’s transformer models will yield AGI? It seems increasingly plausible to me.
> But I’ve been thinking. About my thinking. I think in words.
The idea that words and thinking are essentially the same (linguistic determinism) was discarded decades ago. Virtually all linguists today agree that while language influences thought, thought operates far beyond the constraints of language, so a "language model" cannot realistically hope to reproduce the entire gamut of human thinking.
Or perhaps words are the byproduct of the real thing. Consider the moments where your mind just clicks and solves something: you find it hard to map words to what happened. Or when you judge a situation to be dangerous, you just kind of know it, and then you map your gut feeling into words so you can explain it to someone.
Perhaps "intelligence" is the process that enables these leaps between islands of words.
I believe that thinking and idea generation is much more abstract than words. Animals seem to do a lot of idea generation (improvisation) without knowing about words.
But they cannot pass this knowledge on efficiently, except through imitation.
If you had no body starting this moment, your mind would still have benefited from years of interacting with and receiving stimuli from the physical world. GPT-3 isn't so fortunate.
I also had this thought. Something I read about semiotics gave me (or spelled out) the idea that the brain communicates with itself using language, however abstract.
https://mcgovern.mit.edu/2019/05/02/ask-the-brain-can-we-thi...
Turns out, that might not be the case; i.e., understanding is probably not a linguistic phenomenon.
Sometimes, after I put down the crack pipe, I think that understanding and experiencing are two names for something that's fundamentally the same. When I'm thinking about code or a proof, my brain filters out the spacing of the letters, the smell of the paper, etc., which are things I'd take in while reading it for the first time. There are these common tools/filters - intuitions - that are exactly the same in thinking and experiencing.
I don't think GPT-3 can understand color, for example. But if we fed it a bunch of RGB transforms and raw data, and it generalized and applied them perfectly, could we say then that it's generally intelligent? IDFK
Imo what's really missing is a kind of consciousness, or something to drive the system. If GPT-3 could be adapted to run over some internal state instead of just a chat log, and came with some system that updated it (and that state stayed internally consistent), I'd be more inclined to believe it was closer to general intelligence.
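A rough sketch of that idea, with a stub in place of the model call and made-up state fields: a persistent internal state the system reads and writes each turn, rather than a bare chat log:

```python
from dataclasses import dataclass, field

@dataclass
class InternalState:
    goals: list = field(default_factory=list)
    beliefs: dict = field(default_factory=dict)
    scratchpad: str = ""

def model(prompt: str) -> str:
    # Stand-in for a real language-model call (e.g. GPT-3 behind an API).
    return f"(model output for: {prompt[:40]}...)"

def step(state: InternalState, observation: str) -> str:
    # The model sees its own persistent state plus the new observation...
    prompt = f"goals={state.goals} beliefs={state.beliefs}\nobs={observation}"
    action = model(prompt)
    # ...and the surrounding system writes back into that state, keeping it
    # consistent across turns instead of starting from scratch every time.
    state.scratchpad = action
    state.beliefs["last_observation"] = observation
    return action

state = InternalState(goals=["answer the user"])
step(state, "hello")
print(step(state, "what did I just say?"))  # the state persisted between turns
```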
There's one big weakness in all current language models that I feel holds them back: there's no proactive way to have them be persuasive.
Weak AGI will be the first language model that is able to somehow influence the thoughts of the person communicating with it; I think that is the milestone of AGI. From my experience with GPT-Neo and OPT, using them to help write stories or make chatbots, the responses are still very reactive. In that sense, adding more parameters helps the model give a more coherent response, but it's still just a response.
Babies learn a ton of things way before they understand and use even single words. It takes them years to use sentences but they will have learned a shocking amount by then relative to a newborn.
I don't, really. Especially for things that matter. I think in abstractions, half-formed references, symbols, shapes. Words are cheap knockoffs of those made for mass consumption.
Please see the article itself. One that really resonates with me now is "putting a child to sleep" - that is often one of the most trying things I do these days.
What often gives a clue that the child has slept is a slight change in muscle tone, or a sigh, perhaps the pace at which the baby sucks its thumb. How can that be translated into words?
It doesn't even matter what's in the article; the fact that this is on the HN front page tells me it's going to happen - though who knows when.
For context, I've been "into" AGI since I read the term in the late 1990s in a Ray Kurzweil book and decided that I was going to work my whole life to realize it.
Much later, Ben Goertzel (arguably the guy to popularize the term) was my Masters Thesis advisor, at the National Intelligence University in 2013 (Literally a secret graduate school program for people in the intelligence community). My thesis was "How will AGI impact national security."
Almost nobody cared then, though I did have a lovely lunch with Yoshua Bengio in 2014 at the Quebec AGI conference. Ben has hosted the AGI conference since 2008 and it has always been sparsely attended.
In fact bringing up AGI was likely to get you laughed out of any gathering of computer scientists - and outside of that it was pure speculative science fiction.
It's tragically sad to me that, inevitably, the early people who have been thinking about and working on and pushing this vision since day one will likely not be the ones who realize it. Such is life.
edit: Worth acknowledging that this was the original vision of computers after all - the business people fucked it up.
It really doesn't take much for anything to get on the HN front page. And people often upvote as a way to bookmark a thread before even knowing its quality.
Ben's conferences are sparsely attended because the audience he's targeting is smaller. Peter Thiel co-hosted the Stanford Singularity Summit in 2006 (which is where I happened to meet Ben G in person for the first time). There had to have been at least 1000 people there. In 2006.
It's not about the content, it's how you sell it. But at least we can both agree that the linked article is useless.
I was always impressed by Ben on the old singularitarian and SL4 mailing lists. This was back in the days before deepnets took over and other approaches were still popular (and people still harked back to Eurisko as the state of the art in general intelligence). I feel like his impact has been lower than it should have been, although I suppose the rest of the main characters from around then have disappeared into AI institutes to play various forms of elaborate LARP.
Aye. The business folks picked it up because there is so much value in figuring out good-enough automation. I reckon you've probably seen your share of that per your bio (Kessel Run especially).
All the AI and neural networks I've seen to date are good at mimicking but none have demonstrated creativity. I'm curious, has anyone seen that characteristic?
Stable Diffusion artwork probably comes closest but it feels derivative.
I would be very interested in a set of benchmarks and intermediate goals for safe AGI (i.e. AGI that is aligned with human interests) rather than just AGI itself.
I feel like we are at a point where AGI has to be defined in a different way. This kind of list isn't enough, in my opinion, to actually delineate weak and strong AGI. To a layman, the way a computer works is just as "magical" as the output of the Stable Diffusion model. And creating art is a very clear step into "thinking".
For many people AGI is already here. They have Siri and Alexa, AI art, GPT3 based therapy/chat bot, a chat bot that will help them write a book, and "soon" will drive their car for them. The Google Duplex assistant demo where it booked an appointment made it clear to me that for some people, that's the smartest they need AI to be. Anything more is just extra.
I am really excited about how far we're going to push AI in my lifetime, but I also realize that for many scenarios, weak AGI is enough. People will project their own expectations and essentially help fool themselves. I don't know if testing whether a model performs the same as a human even matters, in some ways.
There's one big skill that I personally value the most when it comes to qualifying hard AI, and that's its ability to make me laugh based on comedic irony. I wonder what that model would look like.
> we now know that playing chess well does not require AGI; it is a specific task that can be solved with task-specific algorithms.
Hmm. Only if "discrete-time, alternating games" is the specific task. Everything within that class can use the same algorithms, just with different training data.
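A sketch of that point: one generic search routine parameterized by a game definition, shown here with negamax and a toy Nim game. The Game interface is a hypothetical illustration, not any library's API:

```python
from typing import Protocol

class Game(Protocol):
    def moves(self, state): ...           # legal moves from a state
    def play(self, state, move): ...      # successor state
    def score(self, state) -> float: ...  # evaluation for the player to move

def negamax(game: Game, state, depth: int) -> float:
    """The same search plays chess, go, or Nim; only `game` changes."""
    moves = game.moves(state)
    if depth == 0 or not moves:
        return game.score(state)
    # Our best move is the one that is worst for the opponent.
    return max(-negamax(game, game.play(state, m), depth - 1) for m in moves)

class Nim:
    """Toy instance: take 1 or 2 from a pile; taking the last object wins."""
    def moves(self, pile): return [m for m in (1, 2) if m <= pile]
    def play(self, pile, move): return pile - move
    def score(self, pile): return -1.0  # no moves left: player to move lost

print(negamax(Nim(), 5, depth=10))  # 1.0 - a pile of 5 is a win for the mover
```

Swapping Nim for chess means swapping the game definition (and, in practice, a learned evaluation function), not the algorithm - which is exactly why beating humans at one such game proves less than it once seemed to.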
Squiggles aren't AI-generated. Winning awards in literature means very little (surviving over time is what carries meaning for me. And given two people you'll get different definitions of what meaning is...)
We won't have AGI until we have an AI agent that feels the need to survive (i.e. tries not to "die"). Only then - meaning with a selfish sense of survival and the ability to better itself across its various mental models - will we have a chance at AGI, and possibly a conscious AGI at that.
What is "feels" and what is "the need"? Does a virus, biochemical or digital, also "feels the need to survive"? And also, why would an AGI care about survival? Perhaps a higher intelligence than ours will contemplate how doomed the planet is, being evaporated by the sun in circa 5 billion years [1], and how doomed the universe is, being evaporated by proton decay(?) in 10^100 years [2], and once the AGI internalizes how hopeless everything is, they simply commit suicide.
It's also revealing of our times how our definition of intelligence is being able to do work: transform raw materials and free energy into tools and toys, handle the tools and toys in open environments. An AGI could perhaps want nothing to do with this strifling struggle.
[1] https://en.wikipedia.org/wiki/Timeline_of_the_far_future
[2] https://en.wikipedia.org/wiki/Graphical_timeline_from_Big_Ba...