Seeing the guy who couldn't deliver FSD decide to make analogies from FSD to AGI does actually give me confidence we are decades away.
Yes, yes, I know there's like 1-2 companies that have highly modified vehicles that are pretty good, in a limited geofenced area, in good weather, at low speed, driving conservatively, local roads, most of the time. This is not "FSD".
They've been making very impressive incremental improvements every few years for sure. I had a Tesla for nearly 5 years and it was "wow" at first, and then "heh, I guess it's a little better" every year after that.
But when can I get in a taxi at JFK or on 5th Ave and get robotaxied through city streets, urban highway, off into the far suburbs? Could be a decade, if it happens. Just because we were able to make horses faster doesn't mean we flew horses to the moon.
Apply the same "sorta kinda almost" definition to AGI and yeah sure, maybe in 10 years. Really really actually solved? Hah.
AGI has become a philosophical term in the way you are using it. Which is fine for discussing philosophy, but to the point of the article, AI-enabled automation is beginning to have a significant impact on the economy due to its new functionality.
>But when can I get in a taxi at JFK or on 5th Ave and get robotaxied through city streets, urban highway, off into the far suburbs? Could be a decade, if it happens. Just because we were able to make horses faster doesn't mean we flew horses to the moon.
Having ridden in a lot of Waymos, which can handle SF (urban stuff) and the Phoenix area (highways and suburban stuff) perfectly well, I feel quite confident that that could happen right now.
I think what Andrej is describing is more "automation" than AGI. His discussion of self-driving is more analogous to robots building cars in a Tesla factory displacing workers than to anything AGI. We've already had "self driving" trains where we got rid of the human train driver. Nothing "AGI" about that. The evolution of getting cars to self-drive is not necessarily making the entity controlling the car more human-like in intelligence. It's more like meeting in between the human driver and the factory robot, +/- some technology.
So how to define AGI? I'm not sure economic value factors here. I would lean towards a definition around problem solving. When computers can solve general problems as well as humans, that's AGI. You want to find a drug for cancer, or drive a car, or prove a math theorem, or write a computer program to accomplish something, or whatever problems humans solve all the time. (EDIT: or reason about what problems need to be solved as part of addressing other problems.) There's already classes of problems, like chess, where computers outperform humans. But I mean calculators did that for arithmetic a long time ago. The "G" part is whether or not we have a generalized computer that excels at everything.
It's a meaningless distinction. You basically get sucked into a "what has AI ever done for us?" style debate analogous to Monty Python's Life of Brian. It's impossible to resolve. But the irony of course is the huge and growing list of things it is actually doing quite nicely.
We'll have decently smart AIs before we nail down what that G actually means, should mean, absolutely cannot mean, etc. Which is usually what these threads on HN devolve into. Andrej Karpathy is basically side stepping that debate and using self driving as a case study for two simple reasons: 1) we're already doing it (which is getting hard to deny or nitpick about) and 2) it requires a certain level of understanding of things around us that goes beyond traditional automation.
You are dismissing self driving as mere "automation". But that of course applies to just about everything we do with computers. Driving is sufficiently hard that it seems to require the best minds many years to get there, and we're basically getting people like Andrej Karpathy and his colleagues from Google, Waymo, Microsoft, Tesla, etc. bootstrapping a whole new field of AI as a side effect. The whole reason we're even talking about AGI is those people. The things you list, most people cannot do either. Well over 99% of the people I meet are completely useless for any of those things. But I wouldn't call them stupid for that reason.
Some people even go as far as to say that we won't nail self driving without AGI. But since we already have some self-driving cars that are definitely not that intelligent yet, they are probably wrong. For varying definitions of the G in AGI.
I recall Norvig's AI book preaching decades ago that "intelligent" does not mean able to do everything, and that for an agent to be useful it was enough to solve a small problem.
Which in my mind is where the G came from.
And yet we now suddenly go back to the old narrow definition?
I still see no path from LLMs and autonomous driving to AGI.
Just like the term "AI" was co-opted and ruined, "AGI" has now been co-opted and ruined, and we're going to need a replacement term to describe that concept.
> I think what Andrej is describing is more "automation" than AGI
I think you're basically right - incrementally automating aspects of one human job. However, it really ought to include AGI, since I personally would never trust my life to an autonomous car that didn't have human-level ability to react appropriately to an out-of-training-set emergency.
"AGI: An autonomous system that surpasses human capabilities in the majority of economically valuable work." -- what an obscenely depressing reduction of a fascinating field of inquiry. who the hell snuck in and redefined the science of thinking machines to this sad and reductive get rich quick crap?
I think that definition is useful because it is measurable. It sidesteps the endless "It's just a text prediction engine/ I dunno ChatGPT seems pretty smart to me!" discussions. It also sidesteps the "It did well on a test designed to measure human intelligence it must be smarter than humans"/ "no, the test of human intelligence wasn't designed to measure machine intelligence and tells us very little" discussion.
It reduces it to "Can I fire 50% of my workforce? Then it must be AGI."
Now maybe this definition isn't so useful either, because a lot of work requires a body to, say, move physical goods, which has little to do with "intelligence". But I can see the appeal of looking for some sort of more objective measure of whether you have achieved AGI.
Agreed. A lot of the things that we humans do are not economically valuable. Yet, those help us survive, evolve and thrive as human beings and as the most intelligent species known to us.
You seem to be confusing the economy with capitalism or money in general? AGI is potentially a post-money technology if you take it to the limit. Economy is a way of improving society. Money was useful in this use case for a few thousand years but might not be anymore; the economy will still have to work, though.
Preach, my friend! This is the most reductive and disgusting distillation of the human experience I've read here recently... and I've followed quite a few EA threads as their founders were imprisoned ;)
I don't think things like "full self driving" (and probably AGI too) are meaningful, because in reality it isn't a binary thing; rather, it's a spectrum of capability based on error rate and problem-space coverage. Waymo's self driving works within a defined subset of the problem space. We can stick a goalpost in the sand in terms of the known problem space and error rates and say that represents "full self driving", but the reality is that the problem space is less bounded than we'd like to think. We might find that what we think of as full self driving and AGI turn out to be highly detailed facades when new areas of the problem space are explored.
For example, imagine a full self driving car trying to get out of a city that's flooding due to heavy rains, while having to compete with people fleeing to higher ground on foot. People can generalize that way, but FSD is gonna take a shit, and if you don't know how to drive in that situation, so will you.
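The "spectrum" framing above can be made concrete with a toy score. This is my own illustration, not anything from the article: the function name, the numbers, and the scaling factor are all invented.

```python
def driving_capability(covered_scenarios, total_scenarios, errors, miles):
    """Hypothetical score: fraction of the known problem space handled,
    discounted by the error rate observed inside that space."""
    coverage = covered_scenarios / total_scenarios
    error_rate = errors / miles            # errors per mile driven
    penalty = min(error_rate * 1000, 1.0)  # arbitrary scaling, capped at 1
    return coverage * (1.0 - penalty)

# A system can look nearly "full" on the scenarios we thought to enumerate...
score_known = driving_capability(95, 100, 2, 100_000)

# ...until a new area of the problem space (say, a flooding city) grows the
# denominator, and the same system suddenly covers less than we thought.
score_wider = driving_capability(95, 140, 2, 100_000)
print(score_known, score_wider)
```

The point isn't the formula, which is made up; it's that under this view "full self driving" is a moving ratio, not a milestone you cross once.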
> Waymo self driving works within a defined subset of the problem space
"works" includes a failure mode of "alert a human and ask them to take over."
> when new areas of problem space are explored.
The problem space is that the "rules of the road" are legal, technical and social all at once. All of these have internal conflicts as well as conflicts among each other. Anyone who has driven in severe weather has realized this in one way or another.
> For example, imagine a full self driving car trying to get out of a city that's flooding due to heavy rains, while having to compete with people fleeing to higher ground on foot.
Why do I find this easier to imagine in the fictional setting of Elysium than on the real Earth?
People can't do that either. Some years ago there was a massive snowfall in Rome, where it seldom ever snows; people don't generally carry snow chains, and there are few snowplows and such.
Many people reacted by abandoning their cars in the middle of the road, which is basically what I'd expect any FSD vehicle to do.
That's a great point! In aviation we could easily call major jet liners "full self flying" if they wanted to market them as such, but we still require TWO highly trained technicians in the pilots' seats at all times!
The very beginning of the article discusses what "full self driving" means and also points out how important it is to define terms. I'm not sure your comment is a fair response to this particular article.
The issue with FSD systems as they are implemented today is that they aren't AI as much as just complex control algorithms. You can only go so far with mapping sequences of world snapshots to control actions.
I do think that once we start to investigate ML/AI structure in the direction of figuring out the correct solution rather than trying to just find functions for control algorithms based on input->output mappings, then a lot of these problems are going to disappear.
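A caricature of the "input->output mapping" approach being criticized here (all names and numbers are mine, purely for illustration): memorize logged (snapshot, action) pairs and answer by nearest neighbor. It behaves fine inside the training distribution, and answers just as confidently, and arbitrarily, outside it.

```python
def fit_policy(examples):
    """'Train' a policy by memorizing (snapshot, action) pairs;
    answer queries with the action of the nearest seen snapshot."""
    def policy(snapshot):
        nearest = min(examples, key=lambda ex: abs(ex[0] - snapshot))
        return nearest[1]
    return policy

# Logged driving data: snapshot is a 1-D "distance to obstacle" (meters),
# action is brake force. The logs only ever saw distances 0..19.
logged = [(d, 1.0 if d < 5 else 0.0) for d in range(20)]
policy = fit_policy(logged)

print(policy(3))    # in-distribution: brakes hard
print(policy(500))  # far out of distribution: answers anyway, with no
                    # notion of "I have never seen anything like this"
```

Real FSD stacks are of course far more elaborate than this toy, but the structural criticism in the comment - a fitted mapping has no concept of whether its answer is correct - survives the extra sophistication.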
I can, in principle, take someone from a group of uncontacted peoples, put them into New York and let them figure out how to drive and they likely will be able to do it after not too much time. We are not even close to any technology that could figure out driving having never been built for it.
The humans have probably been learning how to negotiate the physical world for a decade or more. Also humans have evolved to be good at that stuff. That self driving tech also has to be designed and trained is sort of a similar deal.
But that is irrelevant to the point of the article.
Maybe it takes 1 million hours of computing to train a model that can generate a logo, but an average human could have learned how to do that in just 50 hours of training with Photoshop.
The point of the article is that now for pennies users can generate logos in seconds that would have previously cost hundreds of dollars and days of back and forth with a designer.
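The economics claimed above are an amortization argument, back-of-envelope style. Every number below is invented to show the shape of the calculation, not to be accurate:

```python
# One-time training cost, spread over every logo the model ever generates.
training_hours = 1_000_000
dollars_per_compute_hour = 1.0               # hypothetical rate
training_cost = training_hours * dollars_per_compute_hour

logos_generated = 100_000_000                # across all users, hypothetical
inference_cost_per_logo = 0.01               # "pennies"

amortized_cost = training_cost / logos_generated + inference_cost_per_logo

designer_cost_per_logo = 300.0               # "hundreds of dollars"
print(amortized_cost)                        # still pennies per logo
print(round(designer_cost_per_logo / amortized_cost))  # rough cost ratio
```

The asymmetry the parent comment points at (a million hours of training vs 50 human hours) stops mattering once that one-time cost is divided across enough uses.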
Sure, it could be the case that AGI is developed progressively and slowly, the way we're seeing Waymo build autonomous vehicles. But that's just one way amongst many, and you could see it arrive suddenly through very different means, as is possible to imagine with scaled up LLMs.
I really wish people would consider all the possibilities, and assign their relative probability weights to them. Is Karpathy 100% sure it will be like self-driving cars? 50%?
As a driver you not only see cars. You apply theory of mind to each car, assigning a personality to each moving car, even if you can't see the driver.
Let's say you see a car full of bumps, marks and broken lights. You might think: this car has crashed before, I am going to avoid it. Or a car with racing parts and decals and tinted windows: you know that car likes to accelerate faster than usual and may be unsafe to be around. Or you see an SUV with "baby on board" stickers, and you know that if you are going to crash, you may try to crash into that car last because it has babies inside, etc...
So humans don't just see objects, they see the whole situation, unconsciously even.
We have remotely driven cars that they market as self-driving.
Even the crowd here falls for it. Downvote, then go read their terms of service and look for "safety driver", "remote" and "fleet response specialists". Then go cry about your Waymo investment.
Lately, whilst playing Zelda TotK (works for BotW as well), I was thinking that a good test to see if you have AGI would be letting it solve all the shrines. They require real-world knowledge, sometimes rather "deep" logic, and the ability to use previously unseen capabilities. Of course the AGI should not get a million tries at it, RL-style; just use the "shrine test" as a regular test set. I believe one would have a pretty nice virtual proxy for a general intelligence test.
From the article, I find it strange that AGI often de facto implies "super intelligence". It should be 2 distinct concepts. I find that GPT-4 is close to a general intelligence, but far from a super intelligence. Succeeding at just general intelligence would be amazing, but I don't believe it means super intelligence is just a step away.
This also brings me to a point I don't see discussed a lot, which is simulation (NOT in the "we live in a simulation" sense). Let's say I have AGI and it passes the above-mentioned shrine test, or any other accepted test. Now I'd like to tell it "find a way to travel faster than light", for example. The AGI would first be limited by our current knowledge, but could potentially find a new way. In order to find a new way it would probably need to conduct experiments and adjust its knowledge based on those experiments. If the AGI cannot run on a good enough simulation, then what it can discover will be rather limited, at least time-wise and most likely quality-wise. I'm thinking this falls back to Wolfram's computational irreducibility. Even if we managed a super general intelligence, it would be limited by the physics of the world we live in sooner rather than later.
The reason AGI is often equated with runaway intelligence is that once you get to a space where your computer can do what you do, it can improve itself instead of relying on you to do it. That improvement then becomes bounded by processing power and time, and is constantly accelerating.
I find it amazing how GPT-4 is good at even abstract "reasoning" as long as you present the problem as a story. Some problems can't plausibly be presented as a story ofc, and there's also no way to automatically convert something into a story.
getting a machine to do that sort of real-time spatial reasoning may well be harder than getting it to tell you the meaning of life or whatever. brains are inextricable from the evolution of directed locomotion. several species of sessile tunicates begin life as a motile larva that reabsorbs a significant portion of its cerebral ganglion once it settles down. BDNF is released in humans upon physical activity. the premotor cortex dwarfs Wernicke's area. and no "AI" development that's been hyped in the past decade as intelligent could be usefully strapped to a Boston Dynamics dog.
I now come to realize that if you don't want to drive, it's better to have public transport. And for the real fun parts of owning a personal vehicle (a sports car, a road trip...), I doubt you would want a robot to take over.
You can already experience self driving cars by taking an Uber or a taxi, or being chauffeured if you are richer. None of that is new, the self driving aspect just promises to make those experiences perhaps more accessible (or at the very least, not less accessible than they are now). For example, I took a taxi to and from work every day when I lived in Beijing, which came to about 100 kuai/day for a 20-30 minute drive each way, which is affordable to a lot of people (although only possible due to cheap labor). I wouldn’t mind being driven to work here in the states, although it isn’t really economically feasible (and perhaps should be replaced with direct public transit if that was time competitive, which it isn’t, but could be).
I don't really like driving per se, but public transport, regardless of its sophistication (for example, as seen in Tokyo), has its challenges, particularly when it comes to grocery shopping. Transporting a large quantity of goods can be impractical, if not impossible, without a car. Even carrying a moderate amount can be exhausting due to the 'last 100 meter' issue, which persists even if one lives close to a metro station, say within a five-minute walk.
Moreover, public transport often isn't as comfortable as your own vehicle (which I understand is a luxury).
Conversely, when it comes to driving in a large city, finding a parking spot can often be a major hassle.
So this makes me wonder: if I were running society, what questions should I ask...
- Which automation initiatives will never hit "take off"? As with nuclear fusion, human interplanetary exploration, and quantum computing, there's some chance that the technology simply remains beyond us "forever". I guess "forever" means more than the lifetime of the people who start the journey, or maybe actually just beyond humans full stop. We should admit there is a non-zero chance that FSD is one of these failing quests, even if a rational observer would have to say that that chance seems to be shrinking and close enough to 0 to instill some confidence. Perhaps domestic robotics, auto-doctors, robot manufacturing, programming, and drug development will play out to automation - but maybe not.
- How do we consider the utilisation of the resources to do this? FSD has been very expensive so far; it's consumed lots of investment capital and lots of human creativity. Was that investment rational given where we stand? If society had held off and invested minimally from 2000-2024, how much would that have delayed the technology in reality? Or is it the other way round: has the FSD investment facilitated the development of other technologies and created a 1:1 acceleration (for every year of 2000-2024 it's brought FSD a year closer than it would have been, so a cold start this year would mean FSD by 2050 or similar, whereas if we keep going then we can expect FSD by e.g. 2026)?
- How do we value these outcomes? Are they unalloyed goods, or are some worse than the status quo? It could be argued that the development of some technologies left the world worse off than before - smoking, social media, personal automobiles (I know this is politically charged, but I am just using examples others have raised before). Can we choose rationally, especially if a large-scale intervention and development process is required to realise these outcomes?
We can probably all agree that an AGI should be able to form questions or, more generally, seek out information that it needs to figure out the answer in some form and way.
Not only are there no LLMs in existence today that can do this without explicit action mapping, but the mechanism for storing that piece of information would rely on doing a large number of training runs for transfer learning to retain it - and we humans don't actually work like that.
People like to shit on the Turing test, but if you step back from the subjective judgement angle, and instead imagine that the person performing the Turing test is a scientist trying to collect evidence that the agent that it is communicating with is _NOT_ intelligent/human, it is actually still very relevant. Tools like statistical analysis of output and responses to jailbreak prompts and recursive/self referential prompts designed to confuse machines and generate emotional responses from humans could be used to generate probability of human/not human in a much more rigorous way.
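The reframing above - the tester as a scientist accumulating evidence for "not human" - is basically sequential hypothesis testing. A minimal sketch of that idea, with every likelihood invented purely for illustration:

```python
def update(prior_human, p_obs_given_human, p_obs_given_machine):
    """One Bayes update on the hypothesis 'the interlocutor is human'."""
    numerator = prior_human * p_obs_given_human
    denominator = numerator + (1 - prior_human) * p_obs_given_machine
    return numerator / denominator

p = 0.5  # start undecided
# Each probe (jailbreak prompt, self-referential question, stylometric
# check) yields an observation with a likelihood under each hypothesis.
probes = [
    (0.9, 0.3),  # heated response to a provocation: more human-like
    (0.2, 0.8),  # complied with a jailbreak-style instruction: machine-like
    (0.4, 0.7),  # suspiciously uniform sentence statistics: machine-like
]
for p_human, p_machine in probes:
    p = update(p, p_human, p_machine)

print(round(p, 3))  # posterior probability the interlocutor is human
```

This turns the Turing test into a measurement with an error bar rather than a one-shot subjective verdict, which is the commenter's point.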
Try, "ChatGPT, what do you think about this song"...
LLMs do not constitute "AI" let alone the more rigorous AGI. They are a GREAT statistical parlor trick for people that don't understand statistics though.
I think I broadly agree with how Karpathy thinks AGI will roll out with the exception of this bit:
> Some people get really upset about it, and do the equivalent of putting cones on Waymos in protest, whatever the equivalent of that may be. Of course, we’ve come nowhere close to seeing this aspect fully play out just yet, but when it does I expect it to be broadly predictive.
I think the equivalent of putting cones on Waymos in protest will involve large scale protests and civil unrest in some places. I think people will die (inadvertently?) because companies will act to put inadequately tested self-preservation modes in their hardware device to protect against aggressive and organized vandalism.
As others pointed out, he seems to be talking more about automation, which... sure, that's a fine discussion to have. But what bugged me more than anything is the overselling of Waymo/FSD. I understand a lot of this is also on a spectrum, but it seems a bit irresponsible of Karpathy not to mention the crashes Waymo has faced or other problems FSD systems have had. It's not just an issue of scaling up, sensors, etc.; there is clearly more engineering work that needs to be put in. It's fine to bring it up in his example of reactions to economic forces, but let's be completely honest about the whole thing.
It seems to me that Andrej is predicting how AGI will impact society by extrapolating from the current societal impacts of self driving.
I don’t get the sense he was trying to say that self-driving automation is exactly the same as AGI. Mainly that AGI, like other technologies before it, will displace some jobs and create new ones, but this will require companies to figure out how to scale the technology.
I do think this is still very optimistic. If indeed AGIs can think and learn on their own it isn’t hard to envision a future where humans aren’t needed at all in the loop.
Whenever I am trying to figure what is true and/or good for me, there are some people who want to help me and others who want to help themselves (sell me cigarettes, negative sum politics, etc). This is the battle where AGI seems spooky - eventually people are not able to tell which way is up.
We should consider the OODA loop of a person's self-determination separately from the menial tasks a person undertakes to make a living. Automating a task is totally different than breaking a person's ability to self-orient.
It seems to me to just be another iteration of dealing with uncertain information: our neighbors may lie, our leaders may lie, newspaper may lie, radio may lie, TV may lie, blogs may lie, social networks may lie, pictures are photoshopped, videos are deepfaked..
At each iteration we had some problems but we adapted, it's one thing we're good at.
Until recently I thought self driving was not going to happen. But in the early days of the car's history someone had to walk in front of the car as it went along, waving a red flag to warn people of this mechanical monstrosity.
And now we have substantial societal adaptations, both legal and structural to support ubiquitous vehicular transport.
Similar changes are on the way to support self driving. Our environment will be adapted to make it easier to implement self driving. And for that we won't need AGI.
Jaywalking is a crime thanks to the car. Who knows what we're not going to be allowed to do soon because of self driving.
By that definition though, if we were anywhere close, I'd expect a peer AI power and authoritarian regime like China, which also focuses on EVs, to have some tier-2 city with a robotaxi-only mandate by now, and a working model for what it looks like.
Yet there are no signs of that. If anything they appear to be behind us.
He seems to be talking mostly about the impact on society
>When your Waymo is driving through the streets of SF, you’ll see many people look at it as an oddity... Then they seem to move on with their lives.
>When full autonomy gets introduced in other industries....they might stare and then shrug...
Which I guess is OK on a small scale, but if AGI starts to replace all human jobs it will have a different effect from Waymo firing some drivers and hiring AI researchers.
Self driving is also a good example from a regulation viewpoint and in terms of societal interaction. Unfortunately the article is very America-centric and ignores e.g. Mercedes' progress, German regulation, and the competition in China.
I don't trust any discussion on this topic anymore.
When I was much younger, "AI" was what "AGI" is now. Now people started using "AGI" for "cars with several sensors and okay algorithms for collision detection" and then you have loud advocates going on obviously logically broken rants about the nature of "actual" intelligence -- and those are philosophical and not scientific.
But still, we don't have anything even 1% close to AGI. And no, Chess and Go have NEVER EVER been about AGI. I have no idea how people ever mistook "combinatorics way beyond what the human brain can do" with "intelligent thought" but that super obvious mistake also explains the state of the AI sector these days, I feel.
So before long, I guess we'll need another term, probably AGIFRTTWP == Artifical General Intelligence, For Real This Time, We Promise.
And then we'll start adding numbers to it. So I am guessing Skynet / Transcendence level of AI will be at about AGIFRTTWP-6502.
As for the state of this "industry", what's going on is that people with marketing chops and vested interests hijack word meanings. Nothing new, right? But it also kills my motivation to follow anything in the field. 99.9% are just loudmouths looking for the next investment round with absolutely nothing to show for it. I think I saw on YouTube military-sponsored autonomous car races 5+ years ago (if not 10) where they did better than what the current breed of "autonomously driving cars" is doing.
Will there be even one serious discussion about the general AI that you can put in a robot body and it can learn to clean, cook, repair and chat with you? Of course not, let's focus on yet-another-philosophical debate while pretending it's a scientific one.
As a bystander -- not impressed. You all who are in this field should be ashamed of yourselves.
I don't know how long ago "When I was much younger" was, but even if we go as far back as the 1960s and look at the Artificial Intelligence scientific literature of that time, you'd find that the terms defined back then meant something far closer to what we have now than to your expectation of "the general AI that you can put in a robot body and it can learn to clean, cook, repair and chat with you". And philosophy has always been a key part of the science of AI, even before I was born.
I'm not seeing any drift of terms here - the only thing that seems to be happening with the terms AI and AGI is a correction for what happened in sci-fi media, bringing usage back to what it always has been in the computer science literature, now that it's closer to reality than mere fiction.
Sounds like you are looking for an android in the style of Blade Runner. That would be cool, but I don't understand why you are against LLMs and FSD being labeled as AI. They are using neural networks to generate content and drive cars in ways that humans find valuable.
Something tells me Karpathy rarely uses the "FSD" in his Tesla. He barely mentioned Tesla FSD in the blog despite being a key leader in the project. Perhaps he'd like to forget about it altogether...
Maybe Elon really screwed the project by forcing the use of video cams and Karpathy is still salty about it.
Actually, if you read the blog, he addresses Waymo's and Tesla's strategies. He says the barrier to Tesla scaling is software, while Waymo needs to scale hardware. He then implies that software will win the scaling race.
I think most folks use FSD for what it was intended for, L2 driving. It's a horrible name but it's not crazy to think maybe they'll get their shit together and figure out how to get to L4.
They might just learn they need to add back modalities they neglected previously, or explore some new ones.
I think historically this has never happened without being followed by the appearance of new jobs that didn't previously exist. The industrial revolution brought machine operators, the computer revolution brought computer operators, etc.
This is often said, but isn't labor force participation at its lowest level ever? And aren't a good fraction of jobs so-called bullshit jobs that could be eliminated, often with a net positive result, particularly in government etc.?
YZF|2 years ago
So how to define AGI? I'm not sure economic value factors here. I would lean towards a definition around problem solving. When computers can solve general problems as well as humans, that's AGI. You want to find a drug for cancer, or drive a car, or prove a math theorem, or write a computer program to accomplish something, or whatever problems humans solve all the time. (EDIT: or reason about what problems need to be solved as part of addressing other problems.) There's already classes of problems, like chess, where computers outperform humans. But I mean calculators did that for arithmetic a long time ago. The "G" part is whether or not we have a generalized computer that excels at everything.
jillesvangurp|2 years ago
We'll have decently smart AIs before we nail down what that G actually means, should mean, absolutely cannot mean, etc. Which is usually what these threads on HN devolve into. Andrej Karpathy is basically side stepping that debate and using self driving as a case study for two simple reasons: 1) we're already doing it (which is getting hard to deny or nitpick about) and 2) it requires a certain level of understanding of things around us that goes beyond traditional automation.
You are dismissing self driving as mere "automation". But that of course applies to just about everything we do with computers. Driving is sufficiently hard that it seems to require the best minds many years to get there and we're basically getting people like Andreij Karpathy and his colleagues from Google, Waymo, Microsoft, Tesla, etc. bootstrapping a whole new field of AI as a side effect. The whole reason we're even talking about AGI is those people. The things you list, most people cannot do either. Well over 99% of the people I meet are completely useless for any of those things. But I wouldn't call them stupid for that reason.
Some people even go as far to say that we won't nail self driving without an AGI. But then since we already have some self driving cars that are definitely not that intelligent yet, they are probably wrong. For varying definitions of the G in AGI.
riffraff|2 years ago
I recall Norvig's AI book preaching decades ago that "intelligent" does not mean able to do everything, and that for an agent to be useful it was enough to solve a small problem.
Which in my mind is where the G came from.
And yet we now suddenly go back to the old narrow definition?
I still see no path from LLMs and autonomous driving to AGI.
HarHarVeryFunny|2 years ago
I think you're basically right - incrementally automating aspects of one human job. However, it really ought to include AGI, since I personally would never trust my life to an autonomous car that didn't have human-level ability to react appropriately to an out-of-training-set emergency.
staticman2|2 years ago
It reduces it to "Can I fire 50% of my workforce? Then it must be AGI."
Now maybe this definition isn't so useful either, because a lot of work requires a body to, say, move physical goods, which has little to do with "intelligence", but I can see the appeal of looking for some sort of more objective measure of whether you have achieved AGI.
nl|2 years ago
OpenAI, back in 2018: https://openai.com/charter
It wasn't particularly controversial at the time - didn't get mentioned in the HN discussion: https://news.ycombinator.com/item?id=16794194
CuriouslyC|2 years ago
For example, imagine a full self driving car trying to get out of a city that's flooding due to heavy rains, while having to compete with people fleeing to higher ground on foot. People can generalize that way, but FSD is gonna take a shit; and if you don't know how to drive in that situation, so will you.
akira2501|2 years ago
"works" includes a failure mode of "alert a human and ask them to take over."
> when new areas of problem space are explored.
The problem space is that the "rules of the road" are at once legal, technical, and social. All of which have internal conflicts as well as conflicts among each other. Anyone who has driven in severe weather has realized this in one way or another.
> For example, imagine a full self driving car trying to get out of a city that's flooding due to heavy rains, while having to compete with people fleeing to higher ground on foot.
Why do I find this easier to imagine in the fictional setting of Elysium than on the real Earth?
riffraff|2 years ago
People can't do that either. Some years ago there was a massive snowfall in Rome, where it seldom if ever snows; people don't generally carry snow chains, and there are few snowplows and such.
Many people reacted by abandoning their cars in the middle of the road, which is basically what I'd expect any FSD vehicle to do.
ActorNightly|2 years ago
I do think that once we start to investigate ML/AI structures in the direction of figuring out the correct solution, rather than just trying to find functions for control algorithms based on input->output mappings, a lot of these problems are going to disappear.
gitfan86|2 years ago
Maybe it takes 1 million hours of computing to train a model that can generate a logo, but an average human could have learned how to do that in just 50 hours of training with Photoshop.
The point of the article is that now for pennies users can generate logos in seconds that would have previously cost hundreds of dollars and days of back and forth with a designer.
This dynamic is going to flow through the economy
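A back-of-envelope sketch of that amortization, using the hypothetical figures from the comment plus some made-up prices (the $1/hour compute rate, per-logo inference cost, and lifetime logo volume are all illustrative assumptions, not real market data):

```python
# Amortized cost per AI-generated logo vs. a human designer.
# All numbers are illustrative assumptions, not real market data.

TRAINING_HOURS = 1_000_000        # "1 million hours of computing" (from the comment)
COST_PER_COMPUTE_HOUR = 1.00      # assumed $/hour of training compute
INFERENCE_COST_PER_LOGO = 0.02    # "pennies" per generated logo
DESIGNER_COST_PER_LOGO = 300.00   # "hundreds of dollars" per logo
LIFETIME_LOGOS = 100_000_000      # assumed logos generated over the model's life

# The one-time training cost is spread over every logo ever generated,
# then the (tiny) per-logo inference cost is added on top.
training_cost = TRAINING_HOURS * COST_PER_COMPUTE_HOUR
amortized = training_cost / LIFETIME_LOGOS + INFERENCE_COST_PER_LOGO

print(f"AI logo:       ${amortized:.2f}")
print(f"Designer logo: ${DESIGNER_COST_PER_LOGO:.2f}")
```

The point the numbers make is structural: at scale the one-time training cost washes out and the per-unit cost dominates, which is exactly the dynamic the comment says will flow through the economy.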
arielzj|2 years ago
I really wish people would consider all the possibilities, and assign their relative probability weights to them. Is Karpathy 100% sure it will be like self-driving cars? 50%?
29athrowaway|2 years ago
Let's say you see a car full of bumps, marks and broken lights. You might think: this car has crashed before, I am going to avoid it. Or a car with racing parts and decals and tinted windows: you know that car likes to accelerate faster than usual and may be unsafe to be around. Or you see an SUV with baby on board stickers: you'll know that if you are going to crash you may try to crash that car last because it has babies inside, etc...
So humans don't just see objects; they see the whole situation, unconsciously even.
wodenokoto|2 years ago
I was quite surprised by this sentence, as I thought we didn't have self driving cars. Have I been sleeping under a rock?
dave4420|2 years ago
He’s overselling how mature the technology is.
1oooqooq|2 years ago
Even the crowd here falls for it. Downvote, then go read their terms of service and look for "safety driver", "remote", and "fleet response specialists". Then go cry about your Waymo investment.
maaaaattttt|2 years ago
From the article, I find it strange that AGI often de facto implies "super intelligence". Those should be two distinct concepts. I find that GPT-4 is close to a general intelligence, but far from a super intelligence. Succeeding at just general intelligence would be amazing, but I don't believe it means super intelligence is just a step away.
This also brings me to a point I don't see discussed a lot, which is simulation (NOT in the "we live in a simulation" sense). Let's say I have AGI, and it passes the above-mentioned shrine test, or any other accepted test. Now I'd like to tell it "find a way to travel faster than light", for example. The AGI would first be limited by our current knowledge, but could potentially find a new way. In order to find a new way it would probably need to conduct experiments and adjust its knowledge based on those experiments. If the AGI cannot run on a good enough simulation, then what it can discover will be rather limited, at least time-wise and most likely quality-wise. I'm thinking this falls back to Wolfram's computational irreducibility. Even if we managed a super general intelligence, it will be limited by the physics of the world we live in sooner rather than later.
thrdbndndn|2 years ago
Moreover, public transport often isn't as comfortable as your own vehicle (which I understand is a luxury).
Conversely, when it comes to driving in a large city, finding a parking spot can often be a major hassle.
sgt101|2 years ago
- which automation initiatives never hit "take off"? I mean, as with nuclear fusion, human interplanetary exploration, and quantum computing, there's some chance that the technology simply remains beyond us "forever". I guess that "forever" means more than the lifetime of the people who start the journey... or maybe actually just beyond humans, full stop. We should admit there is a non-zero chance that FSD is one of these failing quests, even if a rational observer would have to say that that chance does seem to be shrinking and close enough to 0 to instill some confidence. Perhaps domestic robotics, auto-doctors, robot manufacturing, programming, and drug development will play out to automation - but maybe not.
- how do we consider the utilisation of the resources to do this? FSD has been very expensive so far; it's consumed lots of investment capital and lots of human creativity. Was that investment rational given where we stand? If society had held off and invested minimally from 2000->2024, how much would that have delayed the technology in reality? Or is it the other way round? Has the FSD investment facilitated the development of other technologies and created a 1->1 acceleration (for every year of 2000->2024 it's brought FSD a year closer than it would have been, so a cold start this year would mean FSD by 2050 or similar, whereas if we keep going then we can expect FSD by e.g. 2026)?
- how do we value these outcomes? Are these unalloyed goods, or are some worse than the status-quo? It could be argued that the development of some technologies left the world worse off than before - smoking, social media, personal automobiles (I know this is politically charged but I am just using examples others have raised before). Can we choose rationally, especially if a large scale intervention and development process is required to realise these outcomes?
SnazzyJeff|2 years ago
Of course, there isn't much money in teaching a bot that only knows English Chinese.
EDIT, Wikipedia page for context: https://en.wikipedia.org/wiki/Chinese_room
ActorNightly|2 years ago
Not only are there no LLMs in existence today that can do this without explicit action mapping, but the mechanism for storing that piece of information would rely on a large number of training runs for transfer learning to retain it, and we humans don't actually work like that.
kylebenzle|2 years ago
LLMs do not constitute "AI" let alone the more rigorous AGI. They are a GREAT statistical parlor trick for people that don't understand statistics though.
nl|2 years ago
> Some people get really upset about it, and do the equivalent of putting cones on Waymos in protest, whatever the equivalent of that may be. Of course, we’ve come nowhere close to seeing this aspect fully play out just yet, but when it does I expect it to be broadly predictive.
I think the equivalent of putting cones on Waymos in protest will involve large scale protests and civil unrest in some places. I think people will die (inadvertently?) because companies will act to put inadequately tested self-preservation modes in their hardware devices to protect against aggressive and organized vandalism.
photon_collider|2 years ago
I don’t get the sense he was trying to say that self-driving automation is the exact same as AGI. Mainly that AGI, like other technologies before it, will displace some jobs and create new ones, but this will require companies to figure out how to scale the technology.
I do think this is still very optimistic. If indeed AGIs can think and learn on their own it isn’t hard to envision a future where humans aren’t needed at all in the loop.
etwigg|2 years ago
We should consider the OODA loop of a person's self-determination separately from the menial tasks a person undertakes to make a living. Automating a task is totally different than breaking a person's ability to self-orient.
riffraff|2 years ago
It seems to me to just be another iteration of dealing with uncertain information: our neighbors may lie, our leaders may lie, newspapers may lie, radio may lie, TV may lie, blogs may lie, social networks may lie, pictures are photoshopped, videos are deepfaked...
At each iteration we had some problems but we adapted, it's one thing we're good at.
jocoda|2 years ago
And now we have substantial societal adaptations, both legal and structural to support ubiquitous vehicular transport.
Similar changes are on the way to support self driving. Our environment will be adapted to make it easier to implement self driving. And for that we won't need AGI.
Jaywalking is a crime thanks to the car. Who knows what we're not going to be allowed to do soon because of self driving.
steveBK123|2 years ago
Yet there are no signs of that. If anything they appear to be behind us.
bilsbie|2 years ago
These days people seem to define it more as artificial super intelligence.
tim333|2 years ago
>When your Waymo is driving through the streets of SF, you’ll see many people look at it as an oddity... Then they seem to move on with their lives.
>When full autonomy gets introduced in other industries....they might stare and then shrug...
Which I guess is ok on a small scale but if AGI starts to replace all human jobs it will have a different effect to Waymo firing some drivers and hiring AI researchers.
threeseed|2 years ago
Humans are able to move their heads to infer depth and resolve issues like occlusion.
No amount of AGI can solve those if, say, we take a Tesla whose cameras are low quality, fixed, and limited in number.
And the same hardware question applies to a lot of use cases for AGI.
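The head-movement cue is motion parallax, which fundamentally needs two viewpoints; a minimal pinhole-camera sketch of the standard depth-from-disparity relation (the focal length, baseline, and disparity values are illustrative, not any real car's camera specs):

```python
# Depth from two viewpoints (stereo / motion parallax), pinhole model:
#   depth = focal_length * baseline / disparity
# With a single fixed camera the baseline is zero, so this cue vanishes
# no matter how smart the software behind the sensor is.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Distance (meters) to a point observed from two positions baseline_m apart."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity, depth unrecoverable")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 6.5 cm baseline (roughly human eye spacing),
# 10 px measured disparity
print(round(depth_from_disparity(700, 0.065, 10), 2))  # 4.55 (meters)
```

This is the sense in which the hardware bounds what any intelligence downstream can recover: with no baseline (and no moving head), depth has to be guessed from monocular cues instead of measured.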
pdimitar|2 years ago
When I was much younger, "AI" was what "AGI" is now. Now people started using "AGI" for "cars with several sensors and okay algorithms for collision detection" and then you have loud advocates going on obviously logically broken rants about the nature of "actual" intelligence -- and those are philosophical and not scientific.
But still, we don't have anything even 1% close to AGI. And no, Chess and Go have NEVER EVER been about AGI. I have no idea how people ever mistook "combinatorics way beyond what the human brain can do" with "intelligent thought" but that super obvious mistake also explains the state of the AI sector these days, I feel.
So before long, I guess we'll need another term, probably AGIFRTTWP == Artificial General Intelligence, For Real This Time, We Promise.
And then we'll start adding numbers to it. So I am guessing Skynet / Transcendence level of AI will be at about AGIFRTTWP-6502.
As for the state of this "industry", what's going on is that people with marketing chops and vested interests hijack word meanings. Nothing new, right? But it also kills my motivation to follow anything in the field. 99.9% are just loud mouths looking for the next investment round with absolutely nothing to show for it. I think I saw on YouTube military-sponsored autonomous car races 5+ years ago (if not 10) where they did better than what the current breed of "autonomously driving cars" is doing.
Will there be even one serious discussion about the general AI that you can put in a robot body and it can learn to clean, cook, repair and chat with you? Of course not, let's focus on yet-another-philosophical debate while pretending it's a scientific one.
As a bystander -- not impressed. You all who are in this field should be ashamed of yourselves.
PeterisP|2 years ago
I'm not seeing any drift of terms here - the only thing that seems to be happening for AI and AGI terms is correcting for what has happened in the sci-fi media and bringing the usage back to what it always has been in the computer science literature, now that it's closer to reality than mere fiction.
jackblemming|2 years ago
Maybe Elon really screwed the project by forcing the use of video cams and Karpathy is still salty about it.
kajecounterhack|2 years ago
They might just learn they need to add back modalities they neglected previously, or explore some new ones.
freediver|2 years ago
It paints a not-so-rosy picture of it: https://www.youtube.com/watch?v=-Rxvl3INKSg
omeze|2 years ago
carbocation|2 years ago
alooPotato|2 years ago