bhelkey | 1 month ago
For example, the author takes the stance that current self-driving cars (Waymo, Zoox) do not count as self-driving. The justification is that a human operator is involved a small fraction of the time.
By law, Waymo must report disengagements in California. In 2024, Waymo logged roughly 10,000 miles driven per disengagement, and Zoox roughly 28,000 [1]. I would say that this rate of human intervention qualifies as self-driving.
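To put those rates in perspective, here is a back-of-envelope sketch of how often a rider would see an intervention, assuming the 2024 rates quoted above and a hypothetical 10-mile trip (the trip length is my assumption, not from the report):

```python
# Expected interventions per trip, given miles driven per disengagement.
MILES_PER_DISENGAGEMENT = {"Waymo": 10_000, "Zoox": 28_000}
TRIP_MILES = 10  # assumed typical robotaxi trip length

for company, rate in MILES_PER_DISENGAGEMENT.items():
    per_trip = TRIP_MILES / rate          # expected interventions on one trip
    trips_between = rate / TRIP_MILES     # trips between interventions, on average
    print(f"{company}: ~{per_trip:.4f} interventions/trip "
          f"(one every ~{trips_between:,.0f} trips)")
```

At these rates a rider would, on average, go roughly a thousand trips (Waymo) or nearly three thousand (Zoox) between interventions.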
[1] https://thelastdriverlicenseholder.com/2025/02/03/2024-disen...
bhelkey | 1 month ago
Looking into the reports you mentioned in a child comment, CNBC reports that Cruise needed human assistance every ~5 miles [1]. I certainly wouldn't call a system that needs assistance every ~5-10 minutes Level 4 self-driving.
Subjectively, Waymo appeared significantly better than Cruise in 2023, but without data it's hard to know what that means in terms of human intervention.
If Waymo needed human assistance every 10-20 minutes, I would agree that it also doesn't qualify as Level 4 autonomous.
[1] https://www.cnbc.com/2023/11/06/cruise-confirms-robotaxis-re...
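The miles-to-minutes conversion above depends on an assumed average speed, which the comments don't state. A minimal sketch of that conversion, using assumed 15-30 mph urban speeds:

```python
# Convert a miles-per-assistance rate into minutes between assists,
# given an assumed average speed (the speeds below are assumptions).
def minutes_between_assists(miles_per_assist: float, avg_mph: float) -> float:
    return miles_per_assist / avg_mph * 60

# Cruise's reported ~5 miles per assist, at assumed city speeds:
for mph in (15, 30):
    mins = minutes_between_assists(5, mph)
    print(f"at {mph} mph: one assist every ~{mins:.0f} minutes")
```

At those assumed speeds, 5 miles per assist works out to one assist every 10-20 minutes.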
bhelkey | 1 month ago
> The definition, or common understanding, of what self driving cars really means has changed since my post on predictions eight years ago. At that time self driving cars meant that the cars would drive themselves to wherever they were told to go with no further human control inputs. It was implicit that it meant level 4 driving. Note that there is also a higher level of autonomy, level 5, that is defined.
ghaff | 1 month ago
Honestly, Brooks--who has been presented and self-presented as something of a skeptic with respect to autonomous self-driving--looks like something of an optimist at this point. (In the sense that your kid won't need to learn to drive.)
bhelkey | 1 month ago
The author engages in rules-lawyering when evaluating the predictions. The original predictions are clear.
Another example of this is the author's prediction that no robot will be able to navigate around the clutter in a US home, "What is easy for humans is still very, very hard for robots."
The author evaluated this prediction as not being met, "...I don't count as home robots small four legged robots that flail their legs quickly to beat gravity, and are therefore unsafe to be around children, and that can't do anything at all with their form factor besides scramble".
The author added constraints not in the original prediction (safe around children, a form factor able to perform an action, ...) and then evaluated the prediction as accurate because no home robot met the original constraint plus the new constraints.