Interesting. Watch at 1/3 speed or so to see it in real time. (Self-driving car videos tend to be published sped up, so you don't see the mistakes.)
The key part of this is, how well does it box everything in the environment? That's the first level of data reduction and the one that determines whether the vehicle hits things. It's doing OK. It's not perfect; it often misses short objects, such as dogs, backpacks on the sidewalk, and once a small child in a group about to cross a street. Fireplugs seem to be misclassified as people frequently. Fixed obstacles are represented as many rectangular blocks, which is fine, and it doesn't seem to be missing important ones. No potholes seen; not clear how well it profiles the pavement. This part of the system is mostly LIDAR and geometry, with a bit of classifier. Again, this is the part of the system essential to not hitting stuff.
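For intuition, here's a toy sketch of that first data-reduction step: collapsing raw obstacle points into axis-aligned boxes. This is not Zoox's pipeline; the clustering rule, the `gap` threshold, and the `box_points` name are all invented for illustration.

```python
from collections import deque

def box_points(points, gap=0.5):
    """Cluster 2-D obstacle points by proximity and return one
    axis-aligned bounding box (xmin, ymin, xmax, ymax) per cluster."""
    points = list(points)
    unseen = set(range(len(points)))
    boxes = []
    while unseen:
        # Grow one cluster with a breadth-first flood over nearby points.
        seed = unseen.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unseen
                    if abs(points[i][0] - points[j][0]) <= gap
                    and abs(points[i][1] - points[j][1]) <= gap]
            for j in near:
                unseen.remove(j)
                cluster.append(j)
                queue.append(j)
        xs = [points[i][0] for i in cluster]
        ys = [points[i][1] for i in cluster]
        boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Real systems do this in 3-D with far better association logic, but the output is the same kind of thing: a short list of boxes instead of a million points.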
This is a reasonable approach. Looks like Google's video from 2017. It's way better than the "dump the video into a neural net and get out steering commands" approach, or the "lane following plus anti-rear-ending, and pretend it's self driving" approach, or the 2D view plane boxing seen from some of the early systems.
Predicting what other road users are going to do is the next step. Once you have the world boxed, you're working with a manageable amount of data. A lot of what happens is still determined by geometry. Can a bike fit in that space? Can the car that's backing up get into the parking space without being obstructed by our vehicle? Those are geometry questions.
Only after that does guessing about human intent really become an issue.
It really really bothers me that these folks are using a live city with real, non-volunteer test subjects of all ages (little kids and old folks use public streets) as a test bed for their massive car-shaped robots.
It's bad enough that people are driving cars all over the place; car collisions have killed more Americans than all the wars we've fought put together.
I'm one of those people who say, "Self-driving cars can't happen soon enough." But I don't think that justifies e.g. killing Elaine Herzberg.
Ask yourself this: why start with cars? Why not make a self-driving golf cart? Make it out of nerf (soft foam) and program it to never go so fast that it can't brake in time to prevent a collision.
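The "never go so fast that it can't brake in time" rule above is just kinematics: reaction travel plus braking distance must fit inside the sensing range. A minimal sketch, where the deceleration and reaction-time defaults are placeholder assumptions rather than measured numbers:

```python
import math

def max_safe_speed(sensing_range_m, decel_mps2=3.0, reaction_s=0.2):
    """Largest speed v such that reaction travel plus braking distance
    still fits inside the sensing range:
        v * t_r + v**2 / (2 * a) <= d
    Solving that quadratic for v gives the cap below."""
    a, t, d = decel_mps2, reaction_s, sensing_range_m
    return a * (math.sqrt(t * t + 2 * d / a) - t)
```

With a short sensing range, the cap comes out at golf-cart speeds, which is exactly the point of the comment.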
Testing these heavy, fast, buggy robots in crowds of people is extremely irresponsible.
There is a different perspective that you could use (and I’m not necessarily advocating for it; hear me out):
Human driven cars are dangerous to the tune of ~36,000 deaths per year. Every year without the implementation of full self driving we pay some large percentage of that number in lives. Self driving cars won’t make it out of the lab without real driving on real roads in real scenarios. Taking appropriate precautions (a human safety driver, maybe two) and testing in the real world might save more lives overall than keeping the vehicles in a more lab-like setting for longer, and missing some of the complexity of the real thing.
I think you're missing the narrative that the self-driving industry is pushing here. They "solved the problem," and their fleets driving around "autonomously" are there to demonstrate this to the public. A golf cart is obviously unsuitable for that purpose.
I think this narrative has run out of steam at this point, by the way. Waymo's valuation has gone from $175B to $105B to $30B since 2018. Zoox specifically is now laying off engineers.
You can't learn to operate in environments you don't train in. It would be great if we had a solution to the out-of-distribution inference/reward problem, but I don't think it really exists.
I'm firmly in the "Perfect for freight, questionable value for consumers" camp WRT autonomous cars. I also think it's irresponsible to do this, but the reality is, they are doing all the socially "appropriate" things, like getting approval from the city.
> It's bad enough that people are driving cars all over the place; car collisions have killed more Americans than all the wars we've fought put together.
Do you want this to stop? Then we're going to have to let these people test their self-driving cars in a real environment. The more we delay this, the more people die in car accidents.
Those were already made years ago, at the start of SDC innovation. A few companies are way beyond the worst human drivers now; there's already a massive amount of motor-vehicle death caused by intoxication that we should worry about, not fantasy robodeaths that we can count on one hand.
What is the general view on Zoox's progress relative to other non-Waymo players, such as Argo, Aurora, and Cruise? There is the widely reported disengagements-per-mile figure, but most robotics people know it is just smoke and mirrors meant to make the regulators go away (disclosure: I studied/researched robotics in grad school).
The general consensus among my AV friends (who work at a bunch of different companies) is that their AV driving stack is really good, but obviously not perfect.
I have no idea about their business model and how COVID affects that, though.
Yawn. Good lane markings, no rain/snow or other bad weather, perfect road surfaces.
Just like all other self-driving demos. I'd like to see a demo like this on snow covered roads, with no lane markings visible. I think that would tell a lot more about the system's ability to deal with an imperfect world.
Well, universality is not necessarily a useful end goal. Lyft is a successful company that doesn't operate even in Canada. A solution that works only in coastal California may well be sufficient.
Lots of things come to the Bay Area and Los Angeles before anywhere else. Partly that's because coastal California is an innovation hotbed. Partly because it's a single large rich market. Since one of these that succeeds entirely in the safe parts of California would be an incredible game-changer on its own (door-to-door small-group spikable public transit!), it's still amazingly exciting.
And while lots of Americans view many things as unchangeable, that's not the case in many other places. In China, if you were to talk to public planners about how autonomous vehicles will handle detours, they'll just say, "Oh, we'll use transmitters to tell you. We can sign the transmitters so you know they're trustworthy." Everything about the universe is mutable.
Yep, no ice road truckers will be autonomous in the next year, and that's okay.
Road infrastructure is going to change, by necessity. It seems like self-driving technology is as good as it can be, given current circumstances. There's no way to get self-driving cars to airplane safety numbers without on/near road devices/reflectors/computer-readable signage/etc, edge compute, better pedestrian understanding of what the cars are seeing and are capable of reacting to, and probably much more. It's time to give it the infrastructural boost it needs to become an everyday reality. We need to put sensors in the road when they're re-paved, transmitters in signs with solar chargers when they're replaced, LIDAR reflectors on the road sides and in medians, start offering clothing/accessories with transmitters or reflectors that clearly identify people as pedestrians...
> Yawn. Good lane markings, no rain/snow or other bad weather, perfect road surfaces.
Ok... What if I were to tell you that there is a solution to this?
The solution to this is simply "don't drive in those conditions".
A self driving car can't get in a wreck that is caused by snowy roads, if it simply doesn't drive in the snow.
Self-driving during perfect conditions is still extremely valuable, because it turns out that a whole lot of driving happens in perfect conditions.
So, you would do things like prevent the taxis from running, if there is any chance of rain at all. I am sure that there are lots of places where rain is not an issue, and rain could be predicted ahead of time. Not everywhere. But still in many places.
That's the good old Pareto principle for you: the last few percent are going to take a lot more effort than the first 95%.
More to the point, this falls into the category of safety-critical systems, with the added wrinkle of potentially being used daily by millions of people. Unlike many domains where software is applied, 80% of the way there doesn't cut it, nor does 95% or 99% or even 99.9%.
(Leaving aside the fact that, for all of us not actively engaged in autonomous vehicle R&D, we likely have absolutely no idea how close we are to success here, or even what all the relevant goalposts would be.)
Possibly for driving in cities and highways on clear days, but we are nowhere close to having autonomous vehicles even match human drivers in 100% of possible/likely driving circumstances and road/weather conditions. That last few percent is the highest hurdle.
All in all I'm quite impressed with the demonstration. It was way more thorough than previous videos I've seen. The main things the car is failing at from what I see are the hard things: Object permanence and ad-hoc reasoning. So no surprises.
Regarding object permanence: I was impressed overall with their detection. Still, you could see kids walking close to parents blink in and out of awareness of the car. Now I'm not saying humans are very good at tracking a multitude of actors. So at some point the machines will be "good enough". But that point seems way off when significant objects like kids can just disappear from awareness when they pass behind a stroller.
And about the ad-hoc reasoning: They have the whole city mapped out! Including traffic lights and turn restrictions. I'm not even clear whether they try to detect the signs at all. I'd assume that they have an operations center that hot-patches the map with everything cropping up during the day. So the cars would send in unexpected changes to the road and they would classify those changes and patch the map. Meaning the car is tethered to that feed and not autonomous in the strictest sense. Sure, such a center would be marginal cost given a large enough fleet. Still it's a subscription you'd need for your own robocar.
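The operations center guessed at above could be sketched roughly like this. Everything here is speculative: the `MapService` class, its methods, and the review step are invented to illustrate the tethered-map idea, not taken from Zoox.

```python
class MapService:
    """Toy hot-patch map feed: cars report observed road changes, an
    operations center reviews them, and the patched map is versioned
    so the fleet knows when to pull an update."""

    def __init__(self, base_map):
        self.version = 0
        self.tiles = dict(base_map)  # tile_id -> tile data
        self.pending = []            # unreviewed reports from cars

    def report_change(self, tile_id, observation):
        # A car saw something that disagrees with its map.
        self.pending.append((tile_id, observation))

    def review_and_patch(self):
        # A human or classifier would triage here; this toy accepts everything.
        for tile_id, observation in self.pending:
            self.tiles[tile_id] = observation
            self.version += 1
        self.pending.clear()

    def pull(self, car_version):
        # Cars on an older version fetch the patched map; up-to-date cars get None.
        return (self.version, self.tiles) if car_version < self.version else None
```

The "subscription" point in the comment is visible in the design: a car that can't reach `pull` is driving on a stale map.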
They mention a lot of things they are prepared for. And I can't help but think "oh they're really good" when they say "detect backed up lanes" or "creep into intersections". But that always leaves the question what happens when they're not prepared for something. When the rules don't fit. Can the car go over a curb if the situation warrants it? Does it back out of a blocked off section? Is it even able to weigh whether backing out is an option at this point?
So I'd like to see a "what we're currently stuck at" video. But I understand one can't very well attract investors with such a video.
I agree with a significant amount of your point, but with regard to object permanence, I would guess that they have prediction algorithms that don't rely only on current-time perception, so if something blips out of sight for a second the system will still infer/predict its existence for a time. (Obviously, if something stays hidden for long enough, the system will stop overriding perception and drop it.)
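A minimal illustration of that idea, as a toy tracker rather than any company's actual algorithm (the 1 m association gate and `max_age` are arbitrary assumptions): a track that stops getting detections is aged rather than dropped immediately, so a brief occlusion behind a stroller doesn't erase the kid.

```python
def update_tracks(tracks, detections, max_age=5):
    """tracks: dict id -> {'pos': (x, y), 'missed': n}.
    Matched tracks get refreshed; unmatched ones age, and are only
    dropped after max_age consecutive missed frames."""
    matched = set()
    for tid, tr in tracks.items():
        # Greedy nearest-detection match within a 1 m gate (toy association).
        best = None
        for d in detections:
            if d in matched:
                continue
            dist = ((tr['pos'][0] - d[0]) ** 2 + (tr['pos'][1] - d[1]) ** 2) ** 0.5
            if dist <= 1.0 and (best is None or dist < best[0]):
                best = (dist, d)
        if best:
            tr['pos'], tr['missed'] = best[1], 0
            matched.add(best[1])
        else:
            tr['missed'] += 1
    # Keep occluded tracks alive until they exceed max_age.
    return {tid: tr for tid, tr in tracks.items() if tr['missed'] <= max_age}
```

Production trackers also predict motion while occluded (a pedestrian keeps walking behind the stroller), which this sketch omits.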
> Handling yellow lights properly, involves us having to predict how long they will remain yellow for
No. That isn't how yellow lights work in the US. If the light turns yellow and you have enough space/time to make a safe stop you do it. There's no need to predict the remaining time on yellow phase. We don't need robot cars bending these rules.
Not sure why you're being downvoted, but I think this is a classic example of why self-driving is so hard. They're not bending the rules, just copying what humans do. We also predict how long a light will be yellow for, but do it naturally (if you just saw it turn from green, or it was yellow as soon as it was in your line of sight).
In Delaware on Route 1 if you follow this advice you are likely to get rear ended. They have traffic lights on a 50mph route that stay in yellow for a long time.
I often find myself slowing down to a stop then awkwardly realizing I’m stopped with multiple seconds of yellow remaining and drivers honking behind me.
No. That's what the law says but not how you drive.
Suppose you're 4 seconds from a yellow light, traveling at high speed. You can slam on your brakes and make a very abrupt stop, or you can cruise through that light and continue on your way.
If the light is about to turn red, you should probably slam on your brakes, because you risk being t-boned in the intersection.
If you have time to get through the yellow light before the cross light turns green, you should keep going, because slamming on your brakes is mildly dangerous.
The law isn't nuanced enough to understand this, with good reason. You don't want to make a bad call about the safest action made in good faith illegal.
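The stop/go trade-off described in this sub-thread reduces to comparing two distances: can the car stop before the line, and can it clear the far side before red? A hedged sketch, with made-up default values for deceleration, reaction time, and intersection width:

```python
def yellow_light_choice(v, dist_m, yellow_s, decel=3.0, reaction=1.0, width=15.0):
    """Decide 'stop', 'go', or 'dilemma' at yellow onset.
    v: speed (m/s), dist_m: distance to the stop line (m),
    yellow_s: expected remaining yellow time (s)."""
    # Stopping is feasible if reaction travel plus braking distance fits.
    stop_dist = v * reaction + v * v / (2 * decel)
    can_stop = stop_dist <= dist_m
    # Going is feasible if the car clears the far side before red.
    can_clear = (dist_m + width) / v <= yellow_s if v > 0 else False
    if can_stop:
        return 'stop'
    if can_clear:
        return 'go'
    return 'dilemma'  # neither option is comfortable: the classic dilemma zone
```

The 'dilemma' branch is exactly the zone traffic engineers try to eliminate by tuning yellow durations to the approach speed, and it's why a planner ends up estimating remaining yellow time at all.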
This is really cool, but the environment is also really simple, and I think we're definitely at least 15+ years out from self-driving cars handling somewhat challenging situations as well as humans.
Just try to put one of these vehicles in a situation with varying road width, no markings, and snow with no sticks to mark the edges, so you really have to pay attention to where the road actually is. What would it do if you meet a car on such a road? Try to figure out who should back up, and maybe reverse to the last place where it's wide enough? Do random tests to check for grip every now and then? It also needs to know whether the road is salted, understand whether the salt is working, and so on and on and on...
"We're definitely at least 15+ years out". Similar statements were made about Go the year it was solved. AV is a vastly harder problem and requires new techniques to get there, but AI progress can happen at any time.
This is super cool! I'm wondering how the car would react if:
* someone parked on the side opens their door too quickly and collides with the zoox car.
* there is a car not moving in front, and the zoox car cannot see what's in the other lane without backing up to get a better view.
I'm also super impressed at how it can understand where the lane is in this 5 lane intersection that crosses a tram line. Even I couldn't understand where I would have had to drive!
>I'm also super impressed at how it can understand where the lane is in this 5 lane intersection that crosses a tram line. Even I couldn't understand where I would have had to drive!
This is actually one of those things that's easier for an AV than a human since they have localization and full lane maps of the city.
The two turns (one left and one right-on-red) leading up to getting to Market Street in the latter half of the video struck me as odd; the left turn looked like a bit of a lane sweep, and the right-on-red looked dubious (is it legal to turn right on red if you're not in the far-right lane?).
SF intersections are hard, though, and the computer seemed to handle them about as well as I would've.
One thing they only mentioned casually towards the end is that they mapped the city beforehand. So the car is starting from a position where it knows all the intersections.
I think background music is important, especially in such long explanatory videos, but it often becomes a reason for me to turn off a video if the music gets too aggressive.
Besides the sheer complexity of situations described in this video, I wonder how these vehicles will deal with differences in traffic rules in different countries (when even road signs can be different).
It sounds like it currently "cheats" a bit by already having driving rules, maps (including signs), etc. baked in; it'd be akin to a human driver memorizing the California Vehicle Code and a map of San Francisco word-for-word and lane-for-lane.
Presumably Zoox deployments in other cities would work similarly, "cheating" by baking in local driving rules and road maps. A consumer-owned self-driving car would likely be able to do something similar by downloading the local ruleset and maps on the fly, assuming one exists.
Are you referring to the pedestrian who's almost crossed the crosswalk on the left side of the screen? This is still a proper yield as far as I can see. The car just enters the intersection before that person has finished crossing.
I mean it's not 100% of the way there. Plus human drivers do that all the time and MUCH worse things. I'm talking from the point of view as a frequent Uber/Lyft passenger.
This demo is not informative as to readiness for scalable L4 deployment. For that, you would need to focus on the breadth and accuracy of the perception features under the hood of intent prediction, and on what happens at the tail end with the arbitrary situations that occur in urban driving environments.
Cheap criticism: the video starts with (I paraphrase) "This is 1 hour of driving", so the last thing I expected after the fade-out/in was to see a man with a weird shirt... and then I noticed the video is about 27 minutes long.
Edit to add: After that I started watching it, it's actually a video of an impressive AI.
emmelaich|5 years ago
The note in the top-right says it's 2x.
anigbrowl|5 years ago
I can live with it. Human drivers annoy me so much that throwing the dice on autonomous cars is not a big stressor to me.
wtvanhest|5 years ago
I forgot how annoying it was.
Gaelan|5 years ago
Are you saying that the numbers are inaccurately reported, or accurately reported but just don't tell the whole story?
chrisseaton|5 years ago
But humans can't drive well in those situations either. Why are you asking for something better than humans can do?
baby|5 years ago
Did you see that 5-lane intersection going over a tram lane? I myself had no idea where I would have driven there.
rrdharan|5 years ago
Maybe my brakes (or reflexes) are just too good?
stefan_|5 years ago
Companies actually put this kind of footage up without ever reviewing it?
dmitriid|5 years ago
You can see it on the top right camera.