Cruise Automation handling double-parked cars with LIDAR.[1] They show the scan lines and some of the path planning. Busy city streets, lots of obstacles.
Waymo handling city traffic with LIDAR.[2] They show the scan lines and some of the path planning. Busy city streets, lots of obstacles.
Tesla self-driving demo, April 2019.[3] They show their display which puts pictures of cars and trucks on screen. No difficult obstacles are encountered. Recorded in the Palo Alto hills and on I-280 on a very quiet day. The only time it does anything at all hard is when it has to make a left turn from I-280 south onto Page Mill, where the through traffic does not stop.[4] Look at the display. Where's the cross traffic info?
Tesla's 2016 self-driving video [5] is now known to have been made by trying over and over until they got a successful run with no human intervention. The 2019 demo looks similar. Although Tesla said they would, they never actually let reporters ride in the cars in full self-driving mode.
> Look at the display. Where's the cross traffic info?
Tesla's display does not render all of the data that the computer knows about.
Additionally this article is assuming the camera based solution for Tesla will be single-camera. Last I checked the actual solution is going to be stereo vision of multiple cameras (think one on each side of windshield) and using ML to combine that data. The Model 3 does not have that capability though because its three cameras are center mounted.
It’s always better to have multiple sensor modalities available.
This is the main takeaway. Unsurprising but interesting nonetheless. I'm working in the field and it confirms my experience.
However they have a big bias that needs to be pointed out:
[...] we must be able to annotate this data at extremely high accuracy levels or the perception system’s performance will begin to regress.
Since Scale has a suite of data labeling products built for AV developers, [...]
Garbage in, garbage out; yes, annotation quality matters. But they're neglecting very promising approaches that allow us to leverage non-annotated datasets (typically standard RGB images) to train models, for example self-supervised learning from video. A great demonstration of the usefulness of self-supervision is monocular depth estimation: taking consecutive frames (2D images) we can estimate per pixel depth and camera ego-motion by training to warp previous frames into future ones. The result is a model capable of predicting depth on individual 2D frames. See this paper [1][2] for an example.
By using this kind of approach, we can lower the need for precisely annotated data.
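For readers curious what that frame-warping objective looks like mechanically, here is a toy NumPy sketch of the view-synthesis loss (nearest-neighbor sampling, pinhole camera model; heavily simplified relative to the paper, and all names here are made up for illustration):

```python
import numpy as np

def photometric_loss(prev_frame, curr_frame, depth, K, R, t):
    """View-synthesis loss behind self-supervised depth: back-project
    each pixel of the current frame using its predicted depth, move it
    by the predicted ego-motion (R, t), re-project it into the previous
    frame, sample, and compare intensities."""
    h, w = curr_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)   # 3D points
    proj = K @ (R @ cam + t.reshape(3, 1))                # into prev view
    u = np.clip(np.round(proj[0] / proj[2]).astype(int), 0, w - 1)
    v = np.clip(np.round(proj[1] / proj[2]).astype(int), 0, h - 1)
    warped = prev_frame[v, u].reshape(h, w)               # nearest-neighbor
    return float(np.abs(warped - curr_frame).mean())
```

In the real training loop the depth map and (R, t) come from networks, the loss is minimized by backprop, and differentiable bilinear sampling plus occlusion masking replace the crude rounding here.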
> taking consecutive frames (2D images) we can estimate per pixel depth
Yeah, I find it odd that they're bringing up Elon's statement about LiDAR, but then completely ignore that Tesla spoke about creating 3D models based on video. They even showed [0] how good a 3D model they could create based on data from their cameras. So they could just as well annotate in 3D.
This completely neglects the fact that humans can build near perfect 3D representations of the world with 2D images stitched together by the parallax processing in our brains. This blog post mentions it only in one throwaway line and says you'd need extremely high-resolution cameras? That doesn't make sense. Two cameras of any resolution spaced a regular distance apart should be able to build a better parallax 3D model than any one camera alone.
The first thing we need to remember is that self-driving systems don't work like our brains. If they did, we wouldn't need to train them with billions of images. So the main problem is not just building the 3D models. For example, we don't crash into a car just because we've never seen that model or that kind of vehicle before. Check https://cdn.technologyreview.com/i/images/bikeedgecasepredic... we never think that there is a bike in front of us.
Humans do a lot more than just identifying an image or doing 3D reconstruction. We have context about the roads, we constantly predict the movement of other cars, we know how to react based on the situation, and most importantly we are not fooled by simple image occlusions. Essentially we have a gigantic correlation engine that makes decisions by comprehending the different things happening on the road.
The AI algorithms we train do not work the same way we do. They depend too heavily on identifying the image. Lidar provides another signal to the system. It provides redundancy and allows the system to make the right decision. Take the above-linked image as an example.
We may not need lidar once the technology matures, but at this stage it is a pretty important redundant system.
The human brain is horrible at building truly accurate 3D representations of the world. Our mental maps are constantly missing a multitude of details, tricking us with approximations that fill in the blanks.
Easy examples of this are optical illusions, ghosts, and ufos. There is also "selective attention tests" where a majority of people miss glaringly obvious events right in front of them, when they're focusing on something else. Regular people also tend to bump into things, spill things, and trip, even when going 3 miles an hour (walking speed).
We learn object representations by interacting with them over years in a multimodal fashion. Take for example a simple drinking glass: we know its material properties (it is transparent, solid, can hold liquids), its typical position (it stays on a tabletop, upright with the open side on top), its usage (grab it with a hand and bring it to your mouth)...
We also make heavy use of the time dimension, as over a few seconds we see the same objects from different view points and possibly in different states.
Only after learning what a glass is can we easily recover its properties on a still 2D image.
So at least for learning (might be skippable at inference), it makes a lot of sense to me to have more than 2D still images.
> Two cameras of any resolution spaced a regular distance apart should be able to build a better parallax 3D model than any one camera alone.
This is true if the platform isn't moving.
If you have the time dimension and you have good knowledge of motion between frames (difficult), you can use the two views as a virtual stereo pair. This is called monocular visual/inertial-SLAM. You can supplement with GPS, 2D lidar, odometry and IMU to probabilistically fuse everything together. There have been some nice results published over the years.
But in general yes, you'll always be better off if you have a proper stereo pair with a camera either side of the car.
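The geometry behind that last point, as a quick sketch (rectified pinhole stereo; the numbers are purely illustrative):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Rectified stereo: a feature seen disparity_px apart in the two
    images lies at depth Z = f * B / d. Because disparity shrinks as
    1/Z, depth error grows roughly quadratically with distance, which
    is why a wide baseline (a camera on each side of the car) helps
    most at range."""
    if disparity_px <= 0:
        return float("inf")  # at or beyond the resolvable range
    return focal_px * baseline_m / disparity_px

# e.g. f = 1000 px, B = 1.5 m: 10 px of disparity -> 150 m of depth
```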
> humans can build near perfect 3D representations of the world
The idea that the human brain has a "near perfect" 3D representation of one's surroundings seems inaccurate to me. There's a difference between near perfection and good enough that people don't often get hurt, when all of their surroundings are deliberately constructed to limit exposure to danger.
For human-level driving, a human-level understanding of the scene from purely visual information is quite good enough. The first problem, though, is that the human brain has far more processing power than any computer that can fit in a car, and probably more than any single computer yet constructed (estimating even to a single order of magnitude is hard). We're also leveraging millions of years of evolution, though I'm not entirely sure how much of a difference that makes given how different our ancestral environment was from driving a car.
The other thing is that we, ideally, want a computer to drive a car better than a human can. There's a lot to be gained from having precise rather than approximate notions of other objects' distances and speeds, in terms of driving both safely and efficiently. Now, Tesla also has that radar, which when fused with visual data will help somewhat, but I'm not sure how far that can get them.
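For what fusing the radar buys, here is a toy sketch of the standard minimum-variance combination of two range estimates (it assumes independent Gaussian errors, and the numbers are hypothetical):

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two independent estimates of the
    same quantity. The fused variance is always no larger than the
    smaller of the two input variances."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# camera range: 50 m with variance 25; radar range: 48 m with variance 1
# -> the fused estimate is pulled strongly toward the radar reading
```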
But it takes at least 10 years to train.

And most of the time we are not building a 3D map from points; we are building it from object inference.
There are many advantages that we have over machines:
o The eye sees much better in the dark
o It has a massive dynamic range, allowing us to see both light and dark things
o it moves to where the threat is
o if it's occluded it can move to get a better image
o it has a massive database of objects in context
o each object has a mass, dimension, speed and location it should be seen in
None of those are 3D maps; they are all inference, where one can derive the threat/advantage based on history.
We can't make machines do that yet.
You are correct that two cameras allow for better 3D point-cloud building in some situations, but a moving single camera is better than a static multiview camera.
However, even then the 3D map isn't all that great, and it has massive latency compared to lidar.
I think most of our ability to judge relative distance is based on our brain's judgment of lighting, texture, inference, and sound. While having two eyes helps a lot, you can still navigate a complex office environment with one eye closed. It just takes a bit more care.
The most rudimentary life forms are little factories that build themselves. I think we should concentrate on making cars that build themselves and maybe then our technology will be sophisticated enough to consider looking into giving our cars human-like optical processing faculties.
Otherwise we'll just have to figure out how to build autonomous vehicles with the technology we have, which is pretty crappy in comparison to biology in a lot of ways still.
You cannot have false negatives. Ever. You cannot have a situation where the system doesn't see a pedestrian and runs over them without noticing. So you need to make a very convincing argument that it can't happen.
With cameras and computer vision there's no way to prove it. There is always a chance that it will glitch out for a second and kill someone.
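To put rough numbers on that intuition, here is a toy model (it assumes, unrealistically, that per-frame errors are independent, and every number is hypothetical):

```python
def prob_at_least_one_miss(per_frame_miss, frames):
    """Probability of at least one missed detection over a run of
    frames, assuming independent per-frame errors."""
    return 1.0 - (1.0 - per_frame_miss) ** frames

# A 30 fps camera over a 1-hour drive produces 108,000 frames. Even a
# one-in-a-million per-frame miss rate then gives roughly a 10% chance
# of at least one miss somewhere in that hour:
frames = 30 * 3600
print(prob_at_least_one_miss(1e-6, frames))  # ~0.10
```

In practice a single missed frame is not a crash, since tracking smooths over brief dropouts, but the arithmetic shows why "glitches for a second" rates have to be astronomically low.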
> near perfect 3D representations of the world with 2D images
This is ridiculous.
I am sitting in front of a monitor right now. Please explain how I can perfectly determine its depth even though I can't see behind it? I can move my head all around it to capture hundreds of different viewpoints, but a car can't do that.
They are refuting a claim that wasn't made. If they need Lidar to do better annotations, fine. You'd only need the lidar on data collection/R&D cars though, and could just use cameras on production cars.
The point Musk and others are making, though, is that the lidar on the market today has poor performance in weather. The cameras will struggle to a degree in weather as well, and a dev car driving through rain is exactly when you need the ground truth to be as clean as possible.
They are saying that lidar enhances the perception system, giving more accurate dimensions and rotations for objects at greater distances.
This means you can predict far better, allowing you, for example, to drive at full speed at night.
Weather affects visual systems as well. The "ooo rain kills lidar" is noise at best. Visual cameras are crap at night.
There is a reason that the radar augmented depth perception demo is in bright light, no rain. Because it almost certainly doesn't work as well at night, and will probably need a separate model.
Some of this feels very cherry-picked. They’re comparing lidar vs camera on snapshots, when a model will always be continuously built as the scene changes.
There’s also one instance where it gives lidar the advantage because it’s mounted on top of the car and can see over signs. What?!
I also feel that they make the 2D annotator's job very hard. I wore an eye patch yesterday (having fun with kids) and reality became extremely confusing. Our brain does not annotate on static 2D images. We annotate on stereoscopic video of moving objects.
This article is only considering static images. Lidar static "images" necessarily contain depth information so yeah obviously they'll have better depth estimates.
But that's really beside the point because the world is not static and any system attempting self-driving will need to take that into account.
Using parallax measurements, which is what Tesla says they are doing, you can dramatically improve depth estimates by comparing multiple frames of 2D images.
Also, just a reminder that Tesla is also using radar in conjunction with the cameras.
This was my question as well. How good are systems over a stream of data?
I am not an expert in this field: how does tracking actually work with a time dimension? There must be some sort of "state" carried over frame by frame? What is the "size" of this state? Surely objects do not just disappear and reappear for certain frames? You can often see this latter effect in many automatic-labeling demos on GitHub.
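Roughly, yes: the simplest trackers carry exactly that kind of state. A toy sketch (names invented for illustration; production systems use Kalman-filtered motion models and Hungarian-algorithm matching rather than this greedy IoU scheme):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

class Tracker:
    """The carried 'state' is one box per live track plus a miss
    counter, so a track survives a few dropped frames instead of
    flickering in and out."""
    def __init__(self, iou_thresh=0.3, max_misses=3):
        self.tracks = {}          # id -> {"box": ..., "misses": 0}
        self.next_id = 0
        self.iou_thresh = iou_thresh
        self.max_misses = max_misses

    def update(self, detections):
        assigned = {}
        unmatched = set(self.tracks)
        for det in detections:
            best = max(unmatched,
                       key=lambda t: iou(self.tracks[t]["box"], det),
                       default=None)
            if best is not None and iou(self.tracks[best]["box"], det) >= self.iou_thresh:
                self.tracks[best] = {"box": det, "misses": 0}
                unmatched.discard(best)
                assigned[best] = det
            else:
                self.tracks[self.next_id] = {"box": det, "misses": 0}
                assigned[self.next_id] = det
                self.next_id += 1
        for t in unmatched:       # tolerate brief detector dropouts
            self.tracks[t]["misses"] += 1
            if self.tracks[t]["misses"] > self.max_misses:
                del self.tracks[t]
        return assigned
```

The miss counter is what keeps objects from "disappearing" for a frame or two; demos that flicker usually run the detector with no such state at all.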
I think it's completely clear at this point that Tesla(or more specifically, Elon Musk) is just simply lying about what their cars will be able to do in the future with existing hardware. Don't get me wrong - the existing "autopilot" is fantastically good. But it's not going to jump from where it is now to full self driving, no matter how many years or millions are poured into it.
This is ignoring the elephant in the room: the AI is not good enough often enough for general-purpose AVs.
In restricted settings it will be great (container terminals, warehouses...), but from everything I have seen (from the outside, as I am not an insider), the last little bit of safety seems unobtainable with neural networks.
I so want to be wrong, and please tell me why I am. I want my next car to have a cocktail cabinet and drive smoothly enough to balance my champagne flute on the arm rest.
Cars will reach 50% and 75% and 95% autonomy, but they won't reach 100% unless we change the infrastructure to be controlled. So long as they are driving among humans on roads made for humans, they will never be 100% autonomous. 100% autonomy might sound like just a little more than 95%, but it's not. 100% is where a car can be built to not have a driver: its passengers can be drunk, or not know how to drive. It's a huge difference from 95% or 99%.
I think when cars are 95% or 99% autonomous they will be sold with human remote control, so there will be centers where manufacturers have hundreds of remote drivers ready to intervene and handle the last 5% or 1% of situations. The race to AV profitability will be won by the manufacturer with the smallest army of backup drivers.
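Back-of-envelope for the size of that army (every number here is hypothetical):

```python
def remote_operators_needed(fleet_size, interventions_per_hour_per_car,
                            minutes_per_intervention, utilization=0.8):
    """Staffing estimate: total intervention workload in operator-hours
    per hour of fleet operation, divided by how busy a single operator
    can realistically be kept."""
    workload = (fleet_size * interventions_per_hour_per_car
                * minutes_per_intervention / 60.0)
    return workload / utilization

# 10,000 cars, one intervention every 2 hours, 3 minutes each:
print(remote_operators_needed(10_000, 0.5, 3))  # 312.5
```

The interesting knob is the intervention rate: halving it halves the headcount, which is exactly the cost race the comment describes.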
Let's just drive multi-ton vehicles instead [1]. Highway driving might be easier to visually parse, but higher speeds and probably less controlled kinematics (i.e., does the software know how to adjust for the cargo?) give one pause.
But remember that accuracy of drawing bounding boxes around objects in still frames is only very slightly related to actual self-driving ability, even if intuition suggests otherwise.
This is basically just an ad for Scale and Scale's services, which include... drawing bounding boxes around objects in still frames.
I think all of this discussion about whether or not LIDAR or cameras are better misses the point that really matters- Will cameras actually be good enough to get the job done? If they are then it doesn't matter which is better. You can always add additional sensors and get more information, but engineering has always been a cost vs benefit problem. If adding LIDAR doesn't give a significant benefit in scenarios that cameras are not already good enough, then they might not be worth the additional expense.
The weird thing about this article is that it's only comparing annotation performance, which is important, but not what you should ultimately care about. If you trained a visual model using annotated lidar for ground truth, then you might expect better performance from the model than from human annotations of the image alone, and certainly better than a model trained on those annotations.
Seems like they are either incompetent or cooking the data. When converting from the image to a top-view shape/outline, one would design and/or train the system to adjust for perspective. Clearly they have not bothered to do that.
And the title is inflammatory. Nobody who understands the discussion is talking only about camera versus lidar. It’s more about camera+radar versus camera+radar+lidar, and other comparisons between other hybrid or standalone sensor combinations. It’s not as simple as one versus the other... surprised we still have to point this out to them.
> And if we also have cars share their sensor data? Would that speed things up in terms of achieving full autonomy?

Yes, drastically. But with it you bring new fears of how that data will be used by the government. You could also achieve something similar by forcing all vehicles to have embedded sensors that share data in real time.
Cooperative strategies open up a lot of new attack vectors. I don't trust those companies to design systems robust enough, especially if they'd have to build a standard together with competitors.
And in the end, your car has to be able to come to a safe stop and avoid dangers no matter the situation. Even with no other cars around or communication interrupted. To reliably achieve this will probably get you most of the way to "real" self driving, with humans/remote operators manually taking care of the few remaining cases.
I suspect this is yet another story sponsored by the Tesla shorts. I just saw an excellent two-hour interview of George Hotz by Lex Fridman, in which he goes into detail about why he thinks cameras will win over lidar.
But he also admits that presently Google is ahead of everyone in the race for level 5, but raises the question of whether they can ever do it economically enough to make money on it?
I also listened to the podcast. George made it sound like the Lidar wasn’t being used for much. It augments the maps to help determine more precise location?
For anyone else who skipped the article - this story likely isn't sponsored by Tesla as it takes a very critical view of Camera-only self driving sensors.
>If your perception system is weak or inaccurate, your ability to forecast the future will be dramatically reduced.
This reasoning is exactly backwards. If your perception system can forecast accurately, it simply must not be weak or inaccurate.
The question here is, what is important information for a system to perceive to make accurate forecasts? Lidar might help a bit... But we know it simply is not required.
This completely fails to address Musk's argument: that for a L5 car you need to be able to drive in inclement weather where LIDAR does not work reliably.
Musk may be right or wrong, but this article is a non-sequitur.
Suspicious of what? None of those got any comments. Sometimes stories slip off the radar before being noticed, the first time, or the first several times. And HN even posts 'dupes' themselves sometimes, to give undercommented stories another go. See https://news.ycombinator.com/item?id=11662380
I submitted a blog post the other day that got 150 comments - I only noticed afterwards it had already been submitted 6 or 7 times before in the months preceding, each without attracting any comments.
No need to be suspicious. AFAIK, HN encourages folks to reshare if they think that a quality post slipped off without much engagement. I even got an email from someone at HN encouraging me to reshare my couple of years' old post again to see if it sticks this time around.
I side with Elon on this. Except he's kind of a cheap bastard, and my idea of using Lytro cameras instead of the cheap ones Tesla uses won't actually fly with him. So yeah, use Lytro and you can forget about Lidar altogether.
[1] http://gmauthority.com/blog/2019/06/how-cruise-self-driving-...
[2] https://www.youtube.com/watch?v=B8R148hFxPw
[3] https://www.youtube.com/watch?v=nfIelJYOygY
[4] https://youtu.be/nfIelJYOygY?t=353
[5] https://player.vimeo.com/video/188105076
[1] https://arxiv.org/abs/1904.04998
[2] more readable on mobile: https://www.arxiv-vanity.com/papers/1904.04998/
0: https://youtu.be/Ucp0TTmvqOE?t=8217
[1] https://www.theverge.com/2019/8/15/20805994/ups-self-driving...
https://www.youtube.com/watch?v=iwcYp-XT7UI 2 hours!
Money quote is when Lex tells him, "Some non-zero part of your brain has a madman in it."
I'd argue that is true of many of the greatest inventors of our time.
DannyBee|6 years ago
Except people don't drive reliably in inclement weather at all, so you don't really want that as the gold standard.
Training a car to be as good as average people driving in the rain/snow would be horrible.
threeseed|6 years ago
So what was Musk's point?
m463|6 years ago
https://news.ycombinator.com/item?id=20677720
https://news.ycombinator.com/item?id=20680495
https://news.ycombinator.com/item?id=20683288
https://news.ycombinator.com/item?id=20686791
https://news.ycombinator.com/item?id=20705890