balloot | 11 years ago
The problem is that the driving model is probabilistic. When you solve a problem probabilistically, getting from 90% coverage to 99%, to 99.9%, to 99.99% involves exponential leaps in difficulty. So even if the car covers 99.9% of driving conditions (and it currently doesn't), there's still a tremendous amount of work to be done to get it to 99.9999% correct, or whatever the threshold is for it to be deemed "safe" for fully autonomous use.
I personally am bearish on the technology, as getting those final edge cases correct will be extremely challenging. At Stanford I came to the opinion that the probabilistic approach would get us to really cool demos, but never to a fully autonomous vehicle. That said, the people working on this are a whole lot smarter than I am, and I would love to be proven wrong.
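The "nines" argument above can be made concrete with a standard statistical sketch (my own illustration, not from the comment): by the rule of three, demonstrating a failure rate below p per mile with ~95% confidence requires roughly -ln(0.05)/p ≈ 3/p failure-free miles. Each additional nine of reliability therefore demands ten times more validation driving.

```python
import math

def miles_to_demonstrate(failure_rate_per_mile, confidence=0.95):
    """Rule of three: zero failures observed over n miles bounds the
    per-mile failure rate below -ln(1 - confidence) / n with the given
    confidence. Solving for n gives the miles needed to demonstrate
    a target failure rate."""
    return -math.log(1.0 - confidence) / failure_rate_per_mile

# Each order-of-magnitude reliability improvement needs 10x more miles.
for rate in (1e-2, 1e-4, 1e-6):
    print(f"failure rate {rate:g}/mile -> "
          f"{miles_to_demonstrate(rate):,.0f} failure-free miles")
```

Note this only counts the miles needed to *verify* a reliability level; actually achieving the rarest edge cases is harder still, which is the commenter's point.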
kamaal | 11 years ago
The idea must be to come up with a general algorithm that solves these problems as a whole, not one specific case at a time.
nicholas73 | 11 years ago
I recently narrowly avoided being killed in a broadside collision by braking just in time. Had I been further along, I would have sped up out of the way instead. Would a probabilistic approach handle this? Maybe they need to compile a list of special edge cases.
balloot | 11 years ago
In the end, you need a learning technology that can properly adapt to any possible situation and give a decent response. Maybe it can be done, but we certainly aren't there yet, and I'm skeptical as to the tractability of the last bit of the problem.
agildehaus | 11 years ago
All this shows is that the Google car was driving well, and you weren't. Though I'm sure as the tech progresses they'll look into this sort of thing, and will implement what makes sense.