thephotonsphere | 6 months ago

Tesla uses only cameras, which sounds crazy (reflections, glare from direct sunlight, fog, smoke, etc.).

LiDAR and radar assistance feel crucial.

https://fortune.com/2025/08/15/waymo-srikanth-thirumalai-int...

latexr|6 months ago

Indeed. Mark Rober did some field tests on that exact difference. LiDAR passed all of them, while Tesla’s camera-only approach failed half.

https://www.youtube.com/watch?v=IQJL3htsDyQ

randallsquared|6 months ago

I'm not sure the guy who did the Tesla crash test hoax and (partially?) faked his famous glitterbomb pranks is the best source. I would separately verify anything he says at this point.

ACCount37|6 months ago

Humans use only cameras. And humans don't even have true 360 coverage on those cameras.

The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.

tfourb|6 months ago

That's actually categorically false. We also use sophisticated hearing, a well-developed sense of inertia and movement, air pressure, impact, etc. And we can swivel our heads to extend our visual coverage to near 360°, while using very dependable and simple technology like mirrors to cover the rest. Add to that that our vision is inherently 3D, and we sport quite an impressive sensor suite ;-). My guess is that the fidelity and range of the sensors on a Tesla can't hold a candle to those of the average human driver. No idea how LiDAR changes this picture, but it sure is better than vision only.

I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.

svara|6 months ago

A fine argument in principle, but even if we talk only about vision, the human visual system is much more powerful than a camera.

Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.

In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, pupil constriction. But that's the point - human vision does not use cameras.
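For scale: each photographic stop is a doubling of light, so the stop counts quoted above imply very different contrast ratios. A quick sketch in Python, using the figures quoted in the comment rather than any independently measured values:

    import math

    def contrast_ratio(stops: float) -> float:
        """Each stop doubles the light, so N stops span a 2**N brightest:darkest ratio."""
        return 2.0 ** stops

    def stops(ratio: float) -> float:
        """Inverse: how many stops a given luminance ratio spans."""
        return math.log2(ratio)

    # Stop counts quoted in the comment above (illustrative, not measurements).
    print(f"45 stops ~= {contrast_ratio(45):.2e} : 1")   # ~3.5e13 : 1
    print(f"16 stops ~= {contrast_ratio(16):,.0f} : 1")  # 65,536 : 1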

TheOtherHobbes|6 months ago

Humans are notoriously bad at driving, especially in poor weather. There are more than 6 million accidents annually in the US, which is >16k a day.

Most are minor, but even so - beating that shouldn't be a high bar.

There is no good reason not to use LIDAR with other sensing technologies, because cameras-only just makes the job harder.

latexr|6 months ago

> Humans use only cameras.

Not true. Humans also interpret the environment in 3D space. See a Tesla fail against a Wile E. Coyote-inspired mural that humans readily perceive as a painted wall:

https://youtu.be/IQJL3htsDyQ?t=14m34s

lagadu|6 months ago

Once computers and AIs can approach even a small fraction of our capacity, then sure, cameras only is fine. It's a shame that our suite of camera-data-processing equipment is so far beyond our understanding that we don't even have models of how it might work at its core.

Even at that point, why would you possibly use only cameras though, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.

vrighter|6 months ago

Which cameras have stereoscopic vision and the dynamic range of an eye?

Even if what you're saying were true, which it's not, cameras are so inferior to eyes that it's not even funny.

bayindirh|6 months ago

Even though it's false, let's imagine it's true.

Our cameras (also called eyes) have way better dynamic range, focus speed, resolution, and movement-detection capabilities, backed by reduced-bandwidth peripheral vision that can also detect movement.

No camera, including professional/medium-format still cameras, is that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera that can see both at the same time, but that's it.

Dynamic range, focus speed, resolution, FoV, and motion detection still lag behind.

...and that's when we imagine that we only use our eyes.

BuckRogers|6 months ago

Except a car isn’t a human.

That’s the mistake Elon Musk made and the same one you’re making here.

Not to mention that humans driving with cameras only is absolutely pathetic. The number of completely avoidable accidents doesn't exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple of cameras.

perryizgr8|6 months ago

> only cameras, which sounds crazy

Crazy that billions of humans drive around every day with two cameras. And they have various defects too (blind spots, foveated vision, myopia, astigmatism, glass reflection, tiredness, distraction).

amelius|6 months ago

The nice thing about LiDAR is that you can use it to train a model that simulates LiDAR from camera inputs only. And, of course, to verify how good that model is.
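A minimal sketch of that idea: project LiDAR returns into the image plane and use them as sparse depth supervision for a camera-only depth model. The tiny network, random stand-in tensors, and hyperparameters below are illustrative assumptions, not any particular production pipeline:

    import torch
    import torch.nn as nn

    # Toy camera-to-depth model: predicts a per-pixel depth map from an RGB frame.
    # A real system would use a far larger network and calibrated projections.
    class DepthNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # depth must be positive
            )

        def forward(self, rgb):
            return self.net(rgb)

    model = DepthNet()
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(100):
        # Stand-in data: camera frames plus LiDAR returns projected into the image,
        # giving sparse ground-truth depth only where a beam actually landed.
        rgb = torch.rand(4, 3, 64, 64)
        lidar_depth = torch.rand(4, 1, 64, 64) * 80.0          # meters
        mask = (torch.rand(4, 1, 64, 64) < 0.05).float()       # ~5% of pixels have a return

        pred = model(rgb)
        # Supervise only where LiDAR measured depth.
        loss = ((pred - lidar_depth).abs() * mask).sum() / mask.sum().clamp(min=1)

        optim.zero_grad()
        loss.backward()
        optim.step()

    # Verification, as the comment suggests: compare predictions against held-out LiDAR sweeps.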

mycall|6 months ago

I can't wait until V2X and sensor fusion come to autonomous vehicles, combining the detailed 3D mapping of LiDAR, the object-classification capabilities of cameras, and the all-weather reliability of radar and radio pings.
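A toy illustration of the fusion part: independent range estimates can be combined by inverse-variance weighting, so the most precise sensor dominates while noisier but weather-robust ones keep contributing. The noise figures below are made-up assumptions, not real sensor specs:

    # Toy sensor fusion: combine independent range estimates from several sensors
    # by inverse-variance weighting (the minimum-variance linear combination for
    # independent Gaussian noise).

    def fuse(measurements):
        """measurements: list of (range_m, std_dev_m) pairs from different sensors."""
        weights = [1.0 / (sigma ** 2) for _, sigma in measurements]
        fused = sum(w * r for w, (r, _) in zip(weights, measurements)) / sum(weights)
        fused_sigma = (1.0 / sum(weights)) ** 0.5
        return fused, fused_sigma

    # Hypothetical readings for one obstacle: LiDAR is precise, monocular camera
    # depth is rough, radar sits in between but keeps working in rain and fog.
    readings = [
        (41.8, 0.1),   # LiDAR
        (44.5, 2.0),   # camera (monocular depth estimate)
        (42.3, 0.5),   # radar
    ]

    distance, sigma = fuse(readings)
    print(f"fused range: {distance:.2f} m +/- {sigma:.2f} m")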