Because human vision has very little in common with camera vision and is a far more advanced sensor, on a far more advanced platform (ability to scan, pivot, etc.), with a lot more compute available to it.
I don't think it's a sensors issue - if I gave you a panoramic feed of what a Tesla sees on a series of screens, I'm pretty sure you'd be able to learn to drive it (well).
The human sensor (eye) isn't more advanced in its ability to capture data -- in fact, cameras can capture a wider range of frequencies.
But the human brain can process the semantics of what the eye sees much better than current computers can process the semantics of the camera data. The camera may be able to see more than the eye, but unless it understands what it sees, it'll be inferior.
Thus Tesla spontaneously activating its windshield wipers to "remove something obstructing the view" (happens to my Tesla 3 as well), whereas the human brain knows that there's no need to do that.
Same for Tesla braking hard when it encountered an island in the road between lanes without clear road markings, whereas the human driver (me) could easily determine what it was and navigate around it.
LIDAR based self-driving cars will always massively exceed the safety and performance of vision-only self driving cars.
Current Tesla cameras+computer vision is nowhere near as good as humans. But LIDAR based self-driving cars already have way better situational awareness in many scenarios. They are way closer to actually delivering.
And what driver wouldn't want extra senses, if they could actually meaningfully be used? The goal is to drive well on public roads, not some "Hands Tied Behind My Back" competition.
The human processing unit understands semantics much better than the Tesla's processing unit. This helps avoid what humans would consider stupid mistakes, but which might be very tricky for Teslas to reliably avoid.
Cost, weight, and reliability. The best part is no part.
No part costs less, it also doesn't break, it also doesn't need to be installed, nor stocked on every dealership's shelf, nor can a supplier hold up production. It doesn't add wires (complexity and size) to the wiring harness, or clog up the CAN bus message queue (LIDAR is a lot of data). It also does not need another dedicated place engineered for it, further constraining other systems and crash safety. Not to mention the electricity used, a premium resource in an electric vehicle of limited range.
That's all off the top of my head. I'm sure there's even better reasons out there.
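The "LIDAR is a lot of data" point can be made concrete with a rough back-of-envelope estimate. The figures below are illustrative assumptions (a typical spinning automotive LIDAR's point rate and an assumed per-point byte layout), not specs for any particular product; in practice LIDARs usually ship data over automotive Ethernet precisely because CAN can't carry it.

```python
# Back-of-envelope: raw LIDAR output vs. classic CAN bus capacity.
# All figures are assumed, round numbers for illustration.

points_per_second = 1_300_000   # assumed point rate for a spinning automotive LIDAR
bytes_per_point = 16            # assumed layout: x, y, z, intensity, timestamp

lidar_bps = points_per_second * bytes_per_point * 8  # bits per second

can_bps = 1_000_000             # classic high-speed CAN tops out at 1 Mbit/s

print(f"LIDAR stream: {lidar_bps / 1e6:.0f} Mbit/s")
print(f"Classic CAN:  {can_bps / 1e6:.0f} Mbit/s")
print(f"Ratio: {lidar_bps / can_bps:.0f}x classic CAN capacity")
```

Even with conservative numbers, the raw stream is two orders of magnitude beyond what a classic CAN bus can move, which is why a LIDAR forces extra high-bandwidth wiring and compute into the design.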
Chimpanzees are better than humans given a reward structure they understand. The next battlefield evolution is chimpanzees hooked up with intravenous cocaine modules, running around with .50 cals.
I drove into the setting sun the other day and needed to shift the window shade and move my head carefully to avoid having the sun directly in my field of vision. I also had to run the wipers to clean off a thin film of dust that made my windshield difficult to see through. And then I still drove slowly and moved my head a bit to make sure I could see every obstacle. My Tesla doesn’t necessarily have the means to do all of these things for each of its cameras. Maybe they’ll figure that out.
I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.
That's (usually) because our reflexes are slow (compared to a computer), or we are distracted by other things (talking, phone, tiredness, sights, etc.), not because we misinterpret what we see.
Try matching a cat's eye on those metrics. And it is much simpler than the human one.
It has to do with the processing of information and decision-making, not data capture.