Since lidar has distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only. Cars using lidar are going to replace at least the ones that don't make use of this obvious answer to obstacle-detection challenges.
runjake|6 days ago
https://archive.is/PPiVG
And here's one of Elon's mentions (he also has talked about it quite a bit in various spots).
https://xcancel.com/elonmusk/status/1959831831668228450?s=20
Edit: My personal view is that LiDAR and other sensors are extremely useful, but I worked on aircraft, not cars.
willio58|6 days ago
- cost (no longer a problem)
- too much code needed and it bloats the data pipelines. Does anyone have any actual evidence of this being the case? Yes, code would be needed, but why is that innately a bad thing? "Bloated data pipelines" feels like another hand-wave; I think if you do it right it's fine, as proven by Waymo.
Really curious if any Tesla engineers feel like this is still the best way forward or if it's just a matter of having to listen to the big guy, Musk.
I’ve always felt that relying on vision only would be a detriment because even humans with good vision get into circumstances where they get hurt because of temporary vision hindrances. Think heavy snow, heavy rain, heavy fog, or even just cresting a hill at a certain time of day when the sun flashes you.
AnotherGoodName|6 days ago
They don’t focus on safety or effectiveness except to say that vision should be ‘sufficient’. Which is damning with faint praise imho.
If that link was meant to argue that the removal of sensors makes perfect sense, I have to point out that anyone who reads it would likely have their negative viewpoint hardened. It was done to reduce cost (back when the sensors cost thousands of dollars) and out of a ridiculous desire by Musk for minimalism. It’s the same desire that removed the indicator stalk, I might add.
kappi|6 days ago
utopcell|5 days ago
Karpathy’s main points:
- Extra sensors add cost to the system and, more importantly, complexity. They make the software task harder and increase the cost of all the data pipelines. They add risk and complexity to the supply chain and manufacturing.
- Elon Musk pushes a philosophy of “the best part is no part,” which can be seen throughout the car in things like doing everything through the touchscreen. This is an expression of that philosophy.
- Vision is necessary to the task (which almost all agree on), and it should also be sufficient. If it is sufficient, the cost of extra sensors and tools outweighs their benefit.
- Sensors change as parts change or become available and unavailable. They must be maintained and the software adapted to these changes. They must also be calibrated to make fusion work properly.
- Having a fleet gathering more data is more important than having more sensors. Having to process LIDAR and radar produces a lot of bloat in the code and data pipelines. He predicts other companies will also drop these sensors in time.
- Mapping the world and keeping it up to date is much too expensive. You won’t change the world with that limitation; you need to focus on vision, which is the most important. The roads are designed to be interpreted with vision.
galangalalgol|6 days ago
estearum|6 days ago
The reasoning was simply that LIDAR was (and incorrectly predicted to always be) significantly more expensive than cameras, and hypothetically that should be fine because, well, humans drive with only two eyes.
Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
Having similar sensors certainly doesn't guarantee your accidents look the same, so I don't think your logic is even internally sound.
bluGill|6 days ago
Also, regulators gather statistics, and if cars with something do better they will mandate it.
small_model|6 days ago
JumpCrisscross|5 days ago
“A federal judge” recently “rejected Tesla's request to overturn a $243 million jury verdict over the 2019 crash of an Autopilot-equipped Model S” [1]. If a human supervising still incurs liability, human-like errors, particularly if Waymo and BYD aren’t making them, is a poor defense.
[1] https://www.reuters.com/world/us-judge-upholds-243-million-v...
georgeecollins|6 days ago
It may just be faster to make lidar cheap. And lidar can do things humans can't.
bko|6 days ago
It's not fair to say that vision-based models will "make the same mistakes people do," as >99% of the mistakes people make are avoidable if those issues were addressed. And a computer can easily address all of those issues.
lesuorac|6 days ago
xnx|6 days ago
lazide|6 days ago
Someone|6 days ago
Human eyes do not have distance information, either, but derive it well enough from spatial (by ‘comparing’ inputs from 2 eyes) or temporal parallax (by ‘comparing’ inputs from one eye at different points in time) to drive cars.
One can also argue that detecting absolute distance isn’t necessary to drive a car. Time-to-contact may be more useful. Even only detecting “change in bearing” can be sufficient to avoid collision (https://eoceanic.com/sailing/tips/27/179/how_to_tell_if_you_...)
Having said that, LiDAR works better than vision in mild fog, and if it’s possible to add a decent absolute distance sensor for little extra cost, why wouldn’t you?
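As a rough sketch of the time-to-contact idea above (all numbers illustrative, not from any real vision stack): an object's angular size and its rate of growth are enough to estimate seconds until contact, with no absolute distance needed.

```python
# Time-to-contact (tau) from angular size alone: tau = theta / (d theta / dt).
# If the image of an object is growing, tau estimates seconds until contact,
# without ever knowing the absolute distance. Values are illustrative.

def time_to_contact(theta_now: float, theta_prev: float, dt: float) -> float:
    """Estimate seconds to contact from two angular-size readings (radians)."""
    d_theta = (theta_now - theta_prev) / dt
    if d_theta <= 0:
        return float("inf")  # object not growing in the image: not closing
    return theta_now / d_theta

# An object's angular size grew from 0.010 to 0.011 rad over 0.1 s:
print(time_to_contact(0.011, 0.010, 0.1))  # roughly 1.1 seconds to contact
```

The "change in bearing" rule from the sailing link is the complementary test: a closing object on a collision course stays at a constant bearing while its angular size grows.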
tsimionescu|6 days ago
larsnystrom|6 days ago
dymk|6 days ago
Single human eyes do resolve depth. Not as well as binocular vision, but you don't lose all depth perception if you lose an eye.
https://en.wikipedia.org/wiki/Monocular_vision
dumbfounder|6 days ago
idiotsecant|6 days ago
nlitened|6 days ago
RobotToaster|6 days ago
Neither do cameras, or eyeballs.
Zigurd|6 days ago
zozbot234|6 days ago
lazide|6 days ago
Also, military sensor use shows the best answer is to have as many different types of sensors as possible and then do sensor fusion. So machine vision, lidar, radar, etc.
That way you pick up things that are missed by one or more sensor types, catches problems and errors from any of them, and end up with the most accurate ‘view’ of the world - even better than a normal human would.
It’s what Waymo is doing, and they also unsurprisingly, have the best self driving right now.
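The core of that sensor-fusion idea can be sketched in a few lines (a toy illustration with made-up variances, not Waymo's actual pipeline): independent distance estimates from camera, lidar, and radar are combined by inverse-variance weighting, which is the static-case heart of a Kalman update.

```python
# Toy sensor fusion: combine independent distance estimates by
# inverse-variance weighting. A more trusted (lower-variance) sensor
# dominates the result, and the fused variance is smaller than any input.

def fuse(measurements):
    """measurements: list of (value, variance). Returns (fused_value, fused_variance)."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / total

readings = [(25.4, 4.0),   # camera estimate: noisy
            (25.0, 0.04),  # lidar estimate: precise
            (25.9, 1.0)]   # radar estimate: in between
value, var = fuse(readings)
print(value, var)  # fused estimate sits near the lidar; variance shrinks
```

This is why fusion beats any single sensor: even a bad sensor adds information, and disagreement between sensors is itself a signal that something is wrong.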
brk|6 days ago
Computer vision does not work exactly like human vision, and closely equating the two has tended to work out poorly in extreme circumstances.
High performance fully automated driving that relies solely on vision is a losing bet.
zemvpferreira|6 days ago
philistine|6 days ago
heisenbit|6 days ago
Yossarrian22|6 days ago
idiotsecant|6 days ago
theappsecguy|6 days ago
foooorsyth|6 days ago
The appeal to human biology and argument against fusion between disparate sensors kinda falls flat when you’re building a world model by fusing feeds from cameras all around the car. Humans don’t have 8 eyes in a 360 array around their head. What they do have is two eyes (super cameras) on ~180 degree swiveling and ~180 degree tilting gimbal. With mics attached that help sense other vehicles via road noise. And equilibrioception, vibration detection, and more all in the same system, all fused. If someone were actually building this system to drive the car, the argument based on “how did you drive here today?” gets a lot stronger. One time I had some water blocking my ear and I drove myself to the hospital to get it fixed. That was a shockingly scary drive — your hearing is doing a lot of sensing while driving that you don’t value until it’s gone.
spyder|6 days ago
peterfirefly|6 days ago
radial_symmetry|6 days ago
tw04|6 days ago
“Just buy FSD” isn’t a reasonable answer to a problem literally no other automaker suffers from.
thinkcontext|6 days ago
https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-cr...
DustinBrett|6 days ago
xpe|6 days ago
This conversational disconnect is as old as the hills:
1. Person 1 asks "what's wrong" (if it ain't broke don't fix it)
2. Person 2 wants to make something better
My meta-goal here on HN (and many places where people converse) is for people to step back and recognize the conversational context and not fall into the predictable patterns that prevent us from making sense of the world as best as we can.
Mawr|6 days ago
Well, you did get a chuckle out of me, so that's something!
Phil_Latio|6 days ago
wasmainiac|6 days ago
zelphirkalt|6 days ago
I have no proof of course, and it might be coincidence, or just a difference of mindset between US citizens and European citizens. It has happened a few times already and to me it looks sus.
But if they actually read the comments and don't just ctrl+F the company name, then of course not writing the company name but hinting at it in an obvious way is no more helpful.
uyzstvqs|6 days ago
pwarner|6 days ago
mgoetzke|6 days ago
tsimionescu|6 days ago
Note that humans do not rely strictly on our eyes as cameras to measure distances. There is a huge amount of inference about the world based on our internal world models that goes into vision. For example, if you put us in a false-perspective or otherwise highly artificial environment, our visual acuity goes down significantly; conversely, people with a single eye (so no parallax-based measurement ability) still have quite decent depth perception compared to what you'd naively expect. Not to mention, our eyes are kept very clean, and maintain their alignment to a very high degree of precision.
numpad0|6 days ago
You can solve this by adding an emitter next to the camera that does something useful, be it beaconing lights or noise patterns or phase-synced laser pulses. And those "active cameras" are what everyone calls LIDARs.
ImPostingOnHN|6 days ago
throwa356262|6 days ago
xpe|6 days ago
"Necessary"? Seems like a straw man, don't you think? I strive to argue against the strongest reasonable claim someone is making.
Lots of reasonable people suggest LIDAR is helpful to fill in gaps when vision is compromised, degraded, or less capable.
People running businesses, of course, will make economic trade-offs. That's fine. But don't confuse, say, Elon's economic tradeoff with the full explanation of reality which must include an awareness that different sensors have different strengths in different contexts.
So, when one thinks about what sensor mix is best for a given application, one would be wise to ask (and answer) such questions as:
- What is the quality bar?
- What sensors are available?
- How well do various combinations of sensors work across the range of conditions that matter for the quality bar?
- WRT the "quality bar": who gets to decide what matters? The company making the cars? The people who drive them? Regulators who care about public safety? The answer: it is a complex combination.
It is time to dismiss any claim (or implication) that "technology good, regulation bad." That might be the dumbest excuse for a philosophy I've ever heard; it is the modern-day analogue of "Brawndo's got what plants crave." Smart people won't make this argument outright, but unfortunately their claims sometimes reduce to this level of absurdity. Neither innovation nor regulation is inherently good or bad. There are deeper principles in play.
Yes, some individuals would use their self-proclaimed freedom to, e.g., drive without seatbelts at 100 mph at night with headlights off. An extreme example, but it is the logical extension of pure individualism run amok. Regulators, and anyone who cares about public safety, will draw a line somewhere and say "No. Individual stupidity has a limit." Those same individuals might eventually come to their senses after they kill someone, but by then it is too late.
nova22033|6 days ago
Phil_Latio|6 days ago
There are probably even earlier statements from him against lidar...
jollyllama|6 days ago
rustystump|6 days ago
But cost isn't the issue as much anymore.
SecretDreams|6 days ago
Individual cameras don't have distance information, but you can easily calibrate a system of cameras to give you distance information. Your eyes do this already, albeit not quantitatively. The quantitative part comes from math our brains aren't set up to do in real time.
dzhiurgis|6 days ago
My father lost vision in one eye, and 50% in the other, something like 20 years ago. He struggles with parking but is otherwise doing OK without lidar. Turns out motion-based vision is more accurate beyond 10-20 meters than stereoscopic vision.
leptons|6 days ago
moogly|6 days ago
DonsDiscountGas|6 days ago
FrustratedMonky|6 days ago
If this lowers lidar costs, and Tesla has spent all this time refining the camera technology, they now have both.
Use both.
pbreit|6 days ago
turtlesdown11|6 days ago
DoesntMatter22|6 days ago
NedF|6 days ago
[deleted]
MetaWhirledPeas|6 days ago
Why are the commenters not pissed at the dozens of other car companies who have done absolutely nothing in this space? Answer: because it's not nearly as fun to be pissed at Kia or Mercedes or whoever. Clearly they are just enjoying the shared anger, regardless of whether it is justified.
array_key_first|6 days ago
2. Other car companies are properly valued, Tesla is overinflated.
3. Other cars, even basic Hondas, have the same level of self driving as Teslas.
4. Other car companies don't lie to their customers about their capabilities or what they're buying.
TulliusCicero|6 days ago
Surely you already know this, so why pretend otherwise?
epolanski|6 days ago
superxpro12|6 days ago
I think the frustration stems from the obvious falsehoods in the advertising, and the doubling-down on the tech, despite the well-documented weaknesses of the implementation.
bko|6 days ago
https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...
thunky|6 days ago
Because we want self driving cars to be safer than human driven cars.
If humans had built in lidar we would use it when driving.
afavour|6 days ago
“We should achieve self driving cars via replicating the human brain” strikes me as an incredibly inefficient and difficult way to solve the problem.
Analemma_|6 days ago
a_better_world|6 days ago
Science would like to point out that rats also can learn to drive
https://theconversation.com/im-a-neuroscientist-who-taught-r...
jeltz|6 days ago
Ajedi32|6 days ago
Whether or not it'll actually work remains to be seen, but it's a perfectly reasonable strategy. One counterargument would be that the bitter lesson can be applied to LIDAR too; you don't have to use that data for feature engineering just because it seems well suited for it.
elicash|6 days ago