cannonpr | 5 months ago

Because human vision has very little in common with camera vision: it's a far more advanced sensor, on a far more advanced platform (able to scan and pivot, etc.), with a lot more compute available to it.

torginus | 5 months ago

I don't think it's a sensor issue: if I gave you a panoramic feed of what a Tesla sees on a series of screens, I'm pretty sure you'd be able to learn to drive it (well).

lstodd | 5 months ago

Yeah, try matching a human eye on dynamic range, then on angular speed, then on refocus. Okay, forget that.

Try matching a cat's eye on those metrics. And it is much simpler than the human one.
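(For a rough sense of the gap lstodd is pointing at, here's a back-of-the-envelope comparison in Python. The contrast figures are ballpark assumptions, commonly cited ranges rather than measurements, and the 120 dB figure is a hypothetical stand-in for a good automotive HDR sensor spec.)

    import math

    def stops(contrast_ratio: float) -> float:
        """Dynamic range in photographic stops (doublings of light)."""
        return math.log2(contrast_ratio)

    def db_to_ratio(db: float) -> float:
        """Convert a dynamic-range spec in dB to a contrast ratio."""
        return 10 ** (db / 20)

    # Ballpark assumptions, not measurements:
    eye_static = 10_000        # ~10^4:1 instantaneous contrast, human eye
    eye_adapted = 10_000_000   # ~10^7:1 after the eye adapts over minutes
    camera_hdr_db = 120        # assumed automotive HDR sensor spec, in dB

    print(f"eye (static):  {stops(eye_static):4.1f} stops")
    print(f"eye (adapted): {stops(eye_adapted):4.1f} stops")
    print(f"HDR camera:    {stops(db_to_ratio(camera_hdr_db)):4.1f} stops")

On these rough numbers a single HDR exposure (~20 stops) lands between the eye's instantaneous (~13 stops) and fully adapted (~23 stops) range, which is roughly the shape of the disagreement in this thread.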

terminalshort | 5 months ago

Who cares? They don't need that. The cameras can have continuous attention on a 360-degree field of vision. That's like saying a car can never match a human at bipedal running speed.

dmos62 | 5 months ago

I'm curious, in what ways is a cat's vision simpler?

insane_dreamer | 5 months ago

The human sensor (the eye) isn't more advanced in its ability to capture data; in fact, cameras can capture a wider range of frequencies (rough numbers at the end of this comment).

But the human brain can process the semantics of what the eye sees much better than current computers can process the semantics of the camera data. The camera may be able to see more than the eye, but unless it understands what it sees, it'll be inferior.

Hence a Tesla spontaneously activating its windshield wipers to "remove something obstructing the view" (it happens to my Model 3 as well), whereas the human brain knows there's no need to do that.

Same for the Tesla braking hard when it encountered a traffic island between lanes without clear road markings, whereas the human driver (me) could easily tell what it was and navigate around it.
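(To put rough numbers on the "wider range of frequencies" point above: the wavelength bounds below are ballpark assumptions for human photopic vision versus a bare silicon CMOS sensor, which responds well into the near-infrared.)

    # Rough spectral-range comparison; the bounds are assumptions.
    C = 299_792_458  # speed of light, m/s

    def thz(wavelength_nm: float) -> float:
        """Frequency in terahertz for a wavelength in nanometres."""
        return C / (wavelength_nm * 1e-9) / 1e12

    bands = {
        "human eye":    (380, 750),   # visible light only
        "silicon CMOS": (350, 1100),  # extends into near-infrared
    }

    for name, (lo, hi) in bands.items():
        print(f"{name:13s} {lo}-{hi} nm  ~ {thz(hi):.0f}-{thz(lo):.0f} THz")

The silicon sensor's extra reach is mostly on the infrared side, which is why a camera can "see" things the eye can't, even though, as argued above, capturing more raw data isn't the same as understanding it.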