Feels too self-congratulatory when he claims to be correct about self-driving in the Waymo case. The bar he set is so broad and ambiguous that probably nothing Waymo did would qualify as self-driving to him. He thinks humans are intervening once every 1-2 miles to train the Waymo; we're not even sure that is true. I heard from friends that it was 100+ miles, but let us say Waymo comes out and says it is 1,000 miles.
Then I bet Rodney can just fiddle with the goalposts and say that 3.26 trillion miles were driven in the US in 2024, so a human intervening every 1,000 miles would mean 3.26 billion interventions, and that this is clearly not self-driving. In fact, until Waymo disables the Internet on all cars and proves it never needs any intervention, ever, Rodney can claim he's right. Even then, the car not stopping exactly where Rodney wanted it to might be proof that self-driving doesn't work.
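A quick sanity check of the arithmetic above (the 3.26 trillion figure is approximate, and the one-intervention-per-1,000-miles rate is purely hypothetical):

```python
# Back-of-the-envelope check of the intervention arithmetic.
US_VEHICLE_MILES_2024 = 3.26e12   # approximate total US vehicle-miles in 2024
MILES_PER_INTERVENTION = 1_000    # hypothetical figure, not a Waymo number

interventions = US_VEHICLE_MILES_2024 / MILES_PER_INTERVENTION
print(f"{interventions:.2e} interventions")  # 3.26e+09, i.e. ~3.26 billion
```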
The prediction of a "next big thing after deep learning" is clearly false. LLMs are deep learning, scaled up; we are not in any sense looking past deep learning. Rodney, I bet, wanted it to be symbolic AI, but that is most likely a dead end, and the bitter lesson actually holds. In fact we have been riding this deep learning wave since AlexNet in 2012. OpenAI talked about scaling as early as 2016, and during that time the naysayers could be very confident and claim we needed something more, but OpenAI went ahead, proved out the scaling hypothesis, and passed the language Turing test. We haven't needed anything more than scale, and reasoning has turned out to be similar: just an LLM trained to reason, no symbolic merger, not even a search step, it seems.
Waymo cars can drive. Everything from the (limited) public literature to riding them personally has me totally persuaded that they can drive.
DeepMind RL/MCTS can succeed in fairly open-ended settings like StarCraft and shit.
Brain/DeepMind still knocks hard. They under-invested in LLMs and remain kind of half-hearted around it because they think it’s a dumbass sideshow because it is a dumbass sideshow.
They train on TPU which costs less than chips made of Rhodium like a rapper’s sunglasses, they fixed the structural limits in TF2 and PyTorch via the Jax ecosystem.
If I ever get interested in making some money again Google is the only FAANG outfit I’d look at.
I think that, if it were true that Waymo cars require human intervention every 1-2 miles (thus requiring 1 operator for every, say, 1-2 cars, probably constantly paying attention while the car is in motion), then it would be fair to say that the cars are not really self driving.
However, if the real number is something like an intervention every 20 or 100 miles, so that an operator is likely passively monitoring dozens of cars, and the cars themselves ask for operator assistance rather than the operator actively watching them, then I would agree with you that Waymo has really achieved full self driving and his predictions on the basic viability have turned out wrong.
I have no idea though which is the case. I would be very interested if there are any reliable resources pointing one way or the other.
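The staffing logic behind that distinction can be sketched with a toy model; the average-speed and handling-time numbers below are my own assumptions, not anything Waymo has published:

```python
# Toy model: how the intervention rate drives remote-operator staffing.
def operators_needed(fleet_size, miles_per_intervention,
                     avg_speed_mph=20.0, minutes_per_intervention=2.0):
    """Estimate concurrent remote operators for a fleet of robotaxis."""
    interventions_per_car_hour = avg_speed_mph / miles_per_intervention
    operator_hours_per_car_hour = (
        interventions_per_car_hour * minutes_per_intervention / 60.0
    )
    return fleet_size * operator_hours_per_car_hour

# An intervention every 1-2 miles: nearly one operator per couple of cars.
print(operators_needed(100, miles_per_intervention=1.5))    # ~44 operators per 100 cars
# An intervention every 100 miles: one operator covers the whole fleet.
print(operators_needed(100, miles_per_intervention=100.0))  # well under 1 operator
```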
Waymo is the best driver I’ve ridden with. Yes it has limited coverage. Maybe humans are intervening, but unless someone can prove that humans are intervening multiple times per ride, “self driving” is here, IMO, as of 2024.
> So he think humans are intervening once every 1-2 miles to train the Waymo
Just to make sure we're applying our rubric fairly and universally: Has anyone else been in an Uber where you wished you were able to intervene in the driving a few times, or at least apply RLHF to the driver?
(In other words: Waymo may be imperfect to the point where corrections are sometimes warranted; that doesn't mean they're not already driving at a superhuman level, for most humans. Just because there is no way for remote advisors to provide better decisions for human drivers doesn't mean that human-driven cars would not benefit from that, if it were available.)
Your objection to him claiming a win on self driving is that you think that we can still define cars as self driving even when humans are operating them? Ok I disagree. If humans are operating them then they simply are not self driving by any sensible definition.
> when he claims to be correct about self driving in the Waymo case. The bar he set is so broad and ambiguous, that probably anything Waymo did, would not qualify as self driving to him
Honestly, back in 2012 or so I was convinced that we would have autonomous driving by now, and by autonomous driving I definitely didn't mean “one company is able to offer autonomous taxi rides in a very limited number of places, with remote operator supervision”. The marketing pitch has always been something like “the car you'll buy will autonomously drive you to whatever destination you ask for, and you'll be just a passenger in your own car”, and we definitely aren't there at all when all we have is Waymo.
Nonsense. If you spoke about self-driving cars a few decades ago you would have understood it to have meant that you could go to a dealer and buy a car that would drive itself, wherever you might be, without your input as a driver.
No-one would have equated the phrase "we'll have self-driving cars" with "some taxis in a few US cities".
The Waymo criticisms are absurd to the point of dishonesty. He criticizes a Waymo for... not pulling out fast enough around a truck, or for human criminals vandalizing them? Oh no, once some Waymos did a weird thing where they honked for a while! And a couple times they got stuck over a few million miles! This is an amazingly lame waste of space, and the fact that he does his best to only talk about Tesla instead of Waymo emphasizes how weak his arguments are, particularly in comparison to his earliest predictions. (Obviously only the best self-driving car matters to whether self-driving cars have been created.)
"Nothing ever happens"... until it does, and it seems Brooks's prediction roundups can now be conveniently replaced with a little rock with "nothing in AI ever works" written on it, without anything of value being lost.
> That being said, we are not on the verge of replacing and eliminating humans in either white collar jobs or blue collar jobs.
Tell that to someone laid off when replaced by some "AI" system.
> Waymo not autonomous enough
It's not clear how often Waymo cars need remote attention, but it's not every 1-2 miles. Customers would notice the vehicle being stopped and stuck while waiting for customer service. There are many videos of people riding in Waymos for hours without any sign of a situation that required remote intervention.
Tesla and Baidu do use remote drivers.
The situations where Waymo cars get stuck are now somewhat obscure cases. Yesterday, the new mayor of SF had two limos double-parked, and a Waymo got stuck behind that. A Waymo got stuck in a parade that hadn't been listed on Muni's street closure list.
> Flying cars
Probably at the 2028 Olympics in Los Angeles. They won't be cost-effective, but it will be a cool demo.
EHang recently put solid-state batteries into their flying car and got 48 minutes of flight time, up from their previous 25 minutes. The EHang is basically a scaled-up quadrotor drone, with 16 motors and props. EHang has been flying for years, but not for very long per recharge. Better batteries will help a lot.
> Tell that to someone laid off when replaced by some "AI" system.
What are some good examples? I am very skeptical of anyone losing their jobs to AI. People are getting laid off for various reasons:
- Companies are replacing American tech jobs with foreign workers
- Many companies hired more devs than they need
- Companies hired many devs during the pandemic and don't need them anymore
Some companies may claim they are replacing devs with AI. I take it with a grain of salt. I believe some devs were probably replaced by AI, but not a large amount.
I think there may be a lot more layoffs in the future, but AI will probably account for a very small fraction of those.
I think what Waymo's achieved is really impressive, and I like the way they've rolled out (carefully), but there's a lot of non-evidence-based defense of them in this comment thread. YouTube videos of people driving for hours are textbook survivorship bias. (What about all the videos people made but didn't upload because their drive didn't go perfectly?)
Nobody knows how many times operators intervene, because Waymo hasn't said. It's literally impossible to deduce.
Which means I agree his estimate could be wildly wrong too.
Good example of everything that can go wrong with a prediction market if left unchecked. Don't like that Waymo broke your prediction? Fine, just move your goalposts. A prediction came true but on the wrong timeframe? Just move the goalposts.
Glad Polymarket (and other related markets) exist so they can put actual goal posts in place with mechanisms that require certain outcomes in order to finalize on a prediction result.
> Glad Polymarket (and other related markets) exist so
Polymarket is a great way to incentivize people to make their predictions happen, with all the clandestine tools at their disposal, which is definitely not what you want for your society in general.
It seems to me that the redefined flying cars for extremely wealthy people did happen? eVTOLs are being sold/delivered to the general public. Certainly still pretty rare, as I've never seen one in real life. I'd love to have one but would probably hate a world where everyone has them.
Not really wanting to have this argument a second time in a week (seriously- just look at my past comments instead of replying here as I said all I care to say https://news.ycombinator.com/item?id=42588699), but he is totally wrong about LLMs just looking up answers in their weights- they can correctly answer questions about totally fabricated new scenarios, such as solving simple physics questions that require tracking the location of objects and reporting where they will likely end up based on modeling the interactions involved. If you absolutely must reply that I am wrong at least try it yourself first in a recent model like GPT-4o and post the prompt you tried.
Kobe Bryant basically commuted by helicopter, when it was convenient. It may have even taken off and landed at his house, but probably not exactly at all of his destinations. Is a “flying car” fundamentally that much different?
Yeah, as an NLP researcher I was reading the post with interest until I found that gross oversimplification about LLMs, which has been repeatedly proved wrong. Now I don't trust the comments and predictions on the other fields I know much less about.
I always have a definitional problem with predictions. Whether a specific prediction is right or wrong is moot if it doesn't help us understand the big picture and the trends.
Take, for example, the prediction about "robots can autonomously navigate all US households". Why all? From the business POV, 80% of the market is "all" in a practical sense, and most people will consider navigation around the home "solved" if robots can do it for the majority of households with virtually no intervention. Hilarious situations will arise that amuse the folks; videos of clumsy robots will flood the internet instead of cats and dogs, but for the business side, it's lucrative enough to produce and sell them en masse.
Another question of interest is: what is the trend? What will the approximate cost of such a robot be? How many US households will adopt such a robot by which time, as they adopted washing machines and dishwashers? Will we see linear adoption or rather logistic adoption? These are more interesting questions than just whether I'm right or wrong.
In reading this I come to wonder if the current advances in "AI" are going to follow the Self Driving Car model. Turns out the 80% is relatively easy to do, but the remaining 20% to get it right is REALLY hard.
Agree, and that is why the agent hype is going to bust. Agents mean giving AI control. That means critical failure modes and the need for a human to constantly oversee the agent's work.
> Their imaginations were definitely encouraged by exponentialism, but in fact all they knew was that when they went from smallish to largish networks following the architectural diagram above, the performance got much better. So the inherent reasoning was that if more made things better then more more would make things more better. Alas for them it appears that this is probably not the case.
I recommend reading Richard Hamming's "The Art of Doing Science and Engineering." Early in the book he presents a simple model of knowledge growth that always leads to an s-curve. The trouble is that on the left, an s-curve looks exponential. We still don't know where we are on the curve with any of these technologies. It is very possible we've already passed the exponential growth phase with some of them. If so, we will need new technologies to move on to the next s-curve.
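Hamming's point is easy to demonstrate numerically; here is a minimal sketch comparing a logistic (s-curve) with the exponential it resembles on its left tail:

```python
# On the far left, a logistic curve is numerically close to an exponential,
# so the two are hard to tell apart from early data alone.
import math

def logistic(t, cap=1.0, rate=1.0):
    return cap / (1.0 + math.exp(-rate * t))

def exponential(t, rate=1.0):
    # Exponential matched to the logistic's early-time behavior (t << 0).
    return math.exp(rate * t)

# Far left of the curve the two agree closely...
for t in (-6, -4, -2):
    print(t, logistic(t), exponential(t))
# ...but the logistic saturates at its cap while the exponential keeps growing.
print(0, logistic(0), exponential(0))  # 0.5 vs 1.0: already diverging
```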
> Systems which do require remote operations assistance to get full reliability cut into that economic advantage and have a higher burden on their ROI calculations
Technically true but I'm not convinced it matters that much. The reason autonomation took over in manufacturing was not that they could fire the operator entirely, but that one operator could man 8 machines simultaneously instead of just one.
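A toy version of that economics, with purely illustrative wage and throughput numbers:

```python
# The autonomation win is not zero operators, but a higher
# machine-to-operator ratio; labor cost per unit drops accordingly.
def labor_cost_per_unit(wage_per_hour, machines_per_operator, units_per_machine_hour):
    return wage_per_hour / (machines_per_operator * units_per_machine_hour)

manual = labor_cost_per_unit(30.0, 1, 10)       # one operator per machine
autonomated = labor_cost_per_unit(30.0, 8, 10)  # one operator per 8 machines
print(manual, autonomated)  # 3.0 0.375 -- an 8x drop in labor cost per unit
```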
All that verbiage about robotaxis and not a single mention about China, which by all accounts is well ahead of the US in deploying them out on the road. (With a distinctly mixed track record, it must be said, but still.)
I like Rodney Brooks, but I find the way he does these predictions very obtuse and subject to a lot of self-congratulatory interpretation. He highlights something green that is "NET2021" and then says he was right when something happened or didn't happen. Does something related happening in 2024 mean that he predicted it right or wrong, or is everything subject to arbitrary interpretation? Where are the bold predictions? This sounds like a lot of fairly obvious predictions with a lot of wiggle room in determining whether they were right or wrong.
NET2021 means that he predicted that the event would take place on or after 2021, so happening in 2024 satisfies that. Keep in mind these are six-year-old predictions.
Are you wishing that he had tighter confidence intervals?
"NET2021" means "no earlier than 2021". So, if nothing even arguably similar happened until 2024, that sounds like a very correct prediction.
Whether that's worth congratulating him about depends on how obvious it was, but I think you really need to measure "fairly obvious" at the time the prediction is made, not seven years later. A lot of things that seem "fairly obvious" now weren't obvious at all then.
For me these predictions are a way of being aware of how progress can happen based on history, but that will not lead to any breakthrough. I am not in the skeptic camp, so I still like hype cycles; they create an environment for people to push the boundaries and sometimes help untested ideas get explored. That might not have happened without a hype cycle. I am in the camp of people who are as positive as George Bernard Shaw in these two quotes:
1. A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.
2. The reasonable person adapts themselves to the world: the unreasonable one persists in trying to adapt the world to themself. Therefore all progress depends on the unreasonable person. (Changed man to person as I feel it should be gender neutral)
In hindsight, everything looks like we anticipated it, so predictions are no different: some pan out, some don't. My feeling after reading the prediction scorecard is that you need the right balance between the risk-averse (who either are doubtful or lack faith that things will happen quickly enough) and risk-takers (who are extremely positive) for anything good to happen. Both help humanity move forward and are a necessary part of nature.
It is possible AGI might replace humans in the short term, and then new kinds of work emerge and humans again find something different. There is always disruption with new changes, and some survive and some can't; even if nothing much happens, it's worth trying, as said in quote 1.
> The billionaire founders of both Virgin Galactic and Blue Origin had faith in the systems they had created. They both personally flew on the first operational flights of their sub-orbital launch systems. They went way beyond simply talking about how great their technology was, they believed in it, and flew in it.
> Let’s hope this tradition continues. Let’s hope the billionaire founder/CEO of SpaceX will be onboard the first crewed flight of Starship to Mars, and that it happens sooner than I expect. We can all cheer for that.
> Individually owned cars can go underground onto a pallet and be whisked underground to another location in a city at more than 100mph.
I'm curious where this idea even came from, not sure who the customer would be, it's a little disappointing he doesn't mention mag-lev trains in a discussion about future rapid transit. I'd much rather ride a smooth mag-lev across town than an underground pallet system.
I don't have a pulse on how far self-driving has come from a tech standpoint, but from an outsider's perspective I'd say it is "achieved" when I can order a self-driving car from an app in all of the top 10 most populated cities in the US (since that's where it is being developed) with as much consistency as Uber/Lyft. The real final boss for self-driving will be the government red tape that companies will need to get through. I doubt local governments will be as laissez-faire with self-driving as they were with Uber being an illegal taxi company.
The final boss will be the first big lawsuit against a manufacturer for liability after someone is killed by a driverless car.
Of course, then we will eventually see infrastructure become even more hostile to non-drivers and people will have to sue their own governments for the right to exist in public without paying transport companies. Strong Towns tried to warn us
Does it drive anyone else crazy when an author posts 15,000 words (yes, there are that many in this article) when 1,500 would have more than communicated the relevant information? The length of this article is almost comical.
It's long, so I'm skimming a little and... flying cars. If you don't know why we don't have flying cars, you're not a good engineer.
It really doesn't matter what prestigious lab you ran, as that apparently didn't impart the ability to think critically about engineering problems.
[Hint: Flying takes 10x the energy of driving, and the cost/weight/volume of 1 MJ hasn't changed in close to a hundred years. Flying cars require a 10x energy breakthrough.]
The article is responding to claims by CEOs of car companies, industry and business press, and other hype sources that keep predicting flying cars next year or so. It predicts that, against this hype, it will not come to pass. Not sure why you've worded your comment as if the article were hyping up flying cars.
Not to mention, since we do have helicopters, the engineering challenge of flying cars is almost entirely unrelated to energy costs (at least for the super rich; the equivalent of, say, a Rolls-Royce, not of a Toyota). The thing stopping flying cars from existing is that it is extremely hard to make an easy-to-pilot flying vehicle, given the numerous degrees of freedom (and potential catastrophic failure modes) and the significantly higher unpredictability and variance of the medium (air vs. road surface).
Plus, the major problem of noise pollution, which gets to extreme levels for somewhat fundamental reasons (you have to displace a whole lot of air to fly, which is very close to having to create sound waves).
So, overall, the energy problem is already solved; we already have point-to-point flying vehicles usable, and occasionally used, in urban areas: helicopters. Making them safe when operated by a very lightly trained pilot, and quiet enough not to wake up a neighborhood, are the real issues that will persist even if we had mini fusion reactors.
Not quite. It's about 3x. It also depends on whether you're talking fixed wing or rotary wings.
A modern car might easily have 130 kW or more, and that's what a Cessna 172 has (around 180 hp). (Sure, a plane cruises at the higher end of that, while a car only uses that much to accelerate and cruises at the lower end of the range - still not a factor of 10x.)
As another datapoint, a Diamond DA40 does around 28 miles per gallon (< 9 litres per 100 km) at 60% power cruise.
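Taking the thread's own DA40 figure, and assuming roughly 35 mpg for an economy car and ~120 MJ per US gallon of gasoline (my assumptions, not the commenter's), the per-mile energy ratio comes out far below 10x:

```python
# Rough per-mile energy comparison: small plane in cruise vs. economy car.
MJ_PER_GALLON = 120.0  # approximate energy content of a US gallon of gasoline

def mj_per_mile(mpg):
    return MJ_PER_GALLON / mpg

car = mj_per_mile(35.0)    # assumed economy car: ~3.4 MJ/mile
plane = mj_per_mile(28.0)  # Diamond DA40 at 60% power cruise: ~4.3 MJ/mile
print(plane / car)         # ~1.25x, nowhere near a 10x energy gap
```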
The article is not optimistic about flying cars. The prediction is that an expensive flying car could be purchased no earlier than 2036, and it notes a strong possibility that it won't even happen by 2050. It also states that even minor success (0.1% of car sales being flying cars) isn't going to happen in his lifetime.
The author also expands on this:
> Don’t hold your breath. They are not here. They are not coming soon.
> Nothing has changed. Billions of dollars have been spent on this fantasy of personal flying cars. It is just that, a fantasy, largely fueled by spending by billionaires.
It’s worth actually reading the article before trashing someone’s career and engineering skills!
It is valuable to make predictions about the world, evaluate those predictions, and reflect on the quality of the predictions and what biases skewed those predictions. The key is to refine how one looks at the world.
I don't see that in this article. Largely, I see the author trying to argue that he was right in 2018 rather than taking a step back to accurately evaluate his predictions.
How much money has been burned on robo-taxis that could have been spent on incubators for kids?
[+] [-] kookamamie|1 year ago|reply
[+] [-] FabHK|1 year ago|reply
> Let’s Continue a Noble Tradition!
> The billionaire founders of both Virgin Galactic and Blue Origin had faith in the systems they had created. They both personally flew on the first operational flights of their sub-orbital launch systems. They went way beyond simply talking about how great their technology was, they believed in it, and flew in it.
> Let’s hope this tradition continues. Let’s hope the billionaire founder/CEO of SpaceX will be onboard the first crewed flight of Starship to Mars, and that it happens sooner than I expect. We can all cheer for that.
dang|1 year ago|reply
Rodney Brooks Predictions Scorecard - https://news.ycombinator.com/item?id=34477124 - Jan 2023 (41 comments)
Predictions Scorecard, 2021 January 01 - https://news.ycombinator.com/item?id=25706436 - Jan 2021 (12 comments)
Predictions Scorecard - https://news.ycombinator.com/item?id=18889719 - Jan 2019 (4 comments)
barnabyjones|1 year ago|reply
I'm curious where this idea even came from; I'm not sure who the customer would be. It's also a little disappointing that he doesn't mention mag-lev trains in a discussion about future rapid transit. I'd much rather ride a smooth mag-lev across town than an underground pallet system.
metalliqaz|1 year ago|reply
Of course, then we will eventually see infrastructure become even more hostile to non-drivers, and people will have to sue their own governments for the right to exist in public without paying transport companies. Strong Towns tried to warn us.
ynniv|1 year ago|reply
It really doesn't matter what prestigious lab you ran, as that apparently didn't impart the ability to think critically about engineering problems.
[Hint: Flying takes 10x the energy of driving, and the cost, weight, and volume of storing 1 MJ haven't changed much in close to a hundred years. Flying cars require a 10x energy breakthrough.]
tsimionescu|1 year ago|reply
Not to mention that, since we do have helicopters, the engineering challenge of flying cars is almost entirely unrelated to energy costs (at least for the super rich: the equivalent of, say, a Rolls Royce, not of a Toyota). The thing stopping flying cars from existing is that it is extremely hard to make an easy-to-pilot flying vehicle, given the numerous degrees of freedom (and potential catastrophic failure modes) and the significantly higher unpredictability and variance of the medium (air versus a road surface).
Plus there is the major problem of noise pollution, which reaches extreme levels for somewhat fundamental reasons (you have to displace a whole lot of air to fly, which is very close to having to create sound waves).
So, overall, the energy problem is already solved: we already have point-to-point flying vehicles that are usable, and occasionally used, in urban areas, namely helicopters. Making them safe when operated by a lightly trained pilot, and quiet enough not to wake a neighborhood, are the real issues, and they would persist even if we had mini fusion reactors.
FabHK|1 year ago|reply
A modern car might easily have 130 kW or more, and that's what a Cessna 172 has (around 180 hp). (Sure, a plane cruises at the higher end of that, while a car only uses that much to accelerate and cruises at the lower end of the range - still not a factor of 10x.)
As another datapoint, a Diamond DA40 does around 28 miles per gallon (< 9 litres per 100 km) at 60% power cruise.
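Those datapoints support a quick sanity check of the "10x energy" claim. A back-of-envelope sketch (the car's 7 L/100 km consumption is an assumed typical figure, not from the thread, and avgas is treated as having roughly gasoline's energy density):

```python
# Back-of-envelope: energy per km of a typical car vs a Diamond DA40 at cruise.
GASOLINE_MJ_PER_L = 34.2   # approximate energy density of gasoline

def mj_per_km(litres_per_100km):
    """Convert fuel consumption to energy use per kilometre."""
    return litres_per_100km * GASOLINE_MJ_PER_L / 100

car = mj_per_km(7.0)              # assumed typical car: 7 L/100 km
da40 = mj_per_km(235.215 / 28)    # 28 mpg (US) converted to L/100 km

print(f"car:  {car:.2f} MJ/km")
print(f"DA40: {da40:.2f} MJ/km")
print(f"ratio: {da40 / car:.2f}x")
```

Under these assumptions the ratio comes out around 1.2x, nowhere near 10x, which is the point being made: for small fixed-wing aircraft at cruise, energy per distance is comparable to a car's.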
anon7000|1 year ago|reply
The author also expands on this:
> Don’t hold your breath. They are not here. They are not coming soon.
> Nothing has changed. Billions of dollars have been spent on this fantasy of personal flying cars. It is just that, a fantasy, largely fueled by spending by billionaires.
It’s worth actually reading the article before trashing someone’s career and engineering skills!
bhelkey|1 year ago|reply
I don't see that in this article. Largely, I see the author trying to argue that he was right in 2018 rather than taking a step back to accurately evaluate his predictions.