Human eyes are better by most metrics than any camera, and certainly than any camera that costs less than a car. Also, obviously, our visual processing is, by most metrics, so much better than the best CV (never mind the sort of CV that can run in real time in a car) that it's not even funny.
They're making fun of Tesla, which stopped putting radar (ed: I misremembered, thanks to the commenter below) in their cars during the pandemic when it got expensive, and instead of saying "we can't afford it", claimed it's actually better to not have lidar and just rely on cameras.
Yeah! Just add more sensors! We're only 992 more sensors away from full self-driving! It totally works that way!
The debris? The very visible piece of debris? The piece of debris that a third party camera inside the car did in fact see? Adding 2 radars and 5 LIDARs would totally solve that!
For fuck's sake, I am tired of this worn-out argument. The bottleneck of self-driving isn't sensors. It was never sensors. The bottleneck of self-driving always was, and still is: AI.
Every time a self-driving car crashes due to a self-driving fault, you pull the blackbox, and what do you see? The sensors received all the data they needed to make the right call. The system had enough time to make the right call. The system did not make the right call. The issue is always AI.
You want the AI to take the camera's uncertainty about a road-colored object and do an emergency maneuver? You don't want to instead add a camera that sees metal and concrete like night and day?
It’s a lot easier to make an AI that highly reliably identifies dangerous road debris if it can see the appearance and the 3D shape of it. There’s a fair bit of debris out there that just looks really weird because it’s the mangled and broken version of something else. There are a lot of ways to mangle and break things, so the training data is sparser than you’d ideally like.
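The point above can be sketched as a toy fusion rule (everything here is hypothetical: the function name, thresholds, and scores are made up for illustration, not taken from any real AV stack): a camera classifier may be unsure about a road-colored object, but a lidar height cue is independent of appearance, so the fused score can still cross a braking threshold.

```python
def fuse_debris_score(camera_score: float, lidar_height_m: float) -> float:
    """Combine a camera confidence in [0, 1] with a lidar height cue.

    An object protruding well above the road surface is suspicious
    regardless of how it looks, so the height cue saturates quickly.
    """
    # Height cue: 0 at road level, ~1 once the object sticks up ~20 cm.
    height_cue = min(lidar_height_m / 0.2, 1.0)
    # Noisy-OR fusion: either strong cue alone is enough to raise the score.
    return 1.0 - (1.0 - camera_score) * (1.0 - height_cue)

# A road-colored tire carcass: the camera is unsure (0.3), but the object
# sits 15 cm proud of the asphalt, so the fused score clears 0.7.
print(fuse_debris_score(0.3, 0.15) > 0.7)  # prints True
```

The noisy-OR is just one simple choice; the argument only needs the weaker claim that an appearance-independent 3D cue adds signal exactly where camera training data is sparse.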