bluepanda928752 | 4 years ago
However, nowadays it is starting to look like 100% reliable depth estimation from cameras might actually require human-level AI to work, while solid-state LIDAR technology is becoming cheap enough to integrate into normal cars. But Tesla can't really change their stance on this without admitting that the FSD options they already sold will not actually become FSD within the lifetimes of those vehicles. I suspect this might also be the reason why Karpathy looks more and more nervous with each new talk.
donio | 4 years ago
That's pretty much a given at this point, but they will not admit it until a class-action lawsuit forces them to.
trhway | 4 years ago
You don't need 100%, and even humans are far from 100% (the roughly 500 Mpx resolution of our eyes lets us basically brute-force through it in many cases). A stereo setup provides fast, accurate estimation at several-megapixel resolution with good fps (way better than lidar) for the majority of situations. Only in some share of scenes, and you know it right then and there, do you need AI and/or very sophisticated, compute-heavy algorithms. So instead of throwing AI and compute power at those parts, you just pull the points from the lidar (and even radar, if things are that bad) covering that segment. That way, given a couple more iterations of sensors (from the current 20 Mpx+ toward hundreds of Mpx) and compute, it will do even better than humans. Anybody not doing sensor fusion would be a loser, though: like going into a fist fight with one hand intentionally disabled.
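The fallback scheme described above (dense stereo depth where it is trusted, sparse lidar points where it is not) can be sketched as a per-pixel confidence-gated fusion. This is a minimal illustration, not any vendor's actual pipeline; it assumes a stereo confidence map and a lidar depth image already projected into the camera frame, and all names are hypothetical:

```python
import numpy as np

def fuse_depth(stereo_depth, stereo_conf, lidar_depth, conf_threshold=0.7):
    """Fuse dense stereo depth with sparse projected lidar depth.

    Where stereo confidence is high, keep the fast, dense stereo
    estimate; where it is low and a lidar return exists, fall back
    to the lidar point. NaN marks pixels with no usable estimate.
    """
    fused = np.full_like(stereo_depth, np.nan)
    use_stereo = stereo_conf >= conf_threshold
    fused[use_stereo] = stereo_depth[use_stereo]
    # ambiguous stereo pixels that do have a lidar return
    use_lidar = ~use_stereo & ~np.isnan(lidar_depth)
    fused[use_lidar] = lidar_depth[use_lidar]
    return fused

# Toy 2x2 scene: top row is stereo-confident, bottom row is ambiguous
# (e.g. textureless road surface); one lidar return covers (1, 0).
stereo = np.array([[10.0, 12.0], [30.0, 35.0]])
conf   = np.array([[0.9,  0.8],  [0.2,  0.1]])
lidar  = np.array([[np.nan, np.nan], [28.0, np.nan]])
print(fuse_depth(stereo, conf, lidar))
```

The confident pixels keep their stereo values, pixel (1, 0) falls back to the lidar depth, and pixel (1, 1), which has neither, stays NaN. A real system would also handle the radar fallback the comment mentions and the projection of lidar points into the image, both omitted here.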