I avoid new technology, exactly because I'm an engineer.
I wonder if it's just me, but when new technology is introduced and hyped, I usually take a quick look at implementations, research, and talks just to get an idea of what the state of the art really is like.
As a result, I have become the late adopter among my group of friends because the first iteration of any new technology usually just isn't worth the issues.
You know that effect where you open a new book and immediately spot a typo? That's how I felt looking at state-of-the-art AI vision papers.
The first paper's code would crash, even though I was using the same GPU as the authors. It turned out they had gotten incredibly lucky not to trigger a driver bug that causes random calculation errors.
The second paper converted float to bool and then tried to use the gradient for training. That's just plain mathematically wrong: a step function's gradient is zero almost everywhere, so nothing useful flows back through it.
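(To make the point concrete, here is a minimal sketch of that mistake; the framework and tensor values are illustrative, not the paper's actual code.)

    import tensorflow as tf

    x = tf.Variable([0.3, -1.2, 0.8])
    with tf.GradientTape() as tape:
        hard = tf.cast(x > 0.0, tf.float32)  # float -> bool -> float: a step function
        loss = tf.reduce_sum(hard)
    # There is no differentiable path through the comparison/cast, so this prints None.
    print(tape.gradient(loss, x))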
The third paper only used a 3x3 pixel neighborhood for learning long-distance motion. That can't work; I cannot learn about New York by walking around in my bathroom.
That gave me the gut feeling that many of the people doing this research lack the necessary mathematical background. AI is stochastic gradient descent optimization, after all.
Thanks to TensorFlow, it is nowadays easy to try out other people's AI. So I took some photos of my road and put them through state-of-the-art computer vision models trained on KITTI, a self-driving-car dataset of German roads. None of them could even track the wall of the house correctly.
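(For context, that kind of experiment looks roughly like the sketch below; the model path, input resolution, and output format are hypothetical placeholders, since they depend entirely on which pretrained network you download.)

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Hypothetical: a KITTI-trained network exported as a TensorFlow SavedModel.
    model = tf.saved_model.load("pretrained_kitti_model/")

    img = Image.open("my_street.jpg").resize((1242, 375))     # KITTI-like resolution
    batch = np.asarray(img, dtype=np.float32)[None] / 255.0   # shape (1, H, W, 3)
    # The output could be depth, optical flow, or segmentation, depending on the model,
    # and the exact calling convention depends on how it was exported.
    pred = model(tf.constant(batch))
    print(pred)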
So now I'm afraid to use anything self-driving ^_^
I remember going with a friend on a Tesla test drive. He was eager to try the Autopilot system. And within a few minutes we got freaked out by what seemed to us like unpredictable actions, and we turned it off.
I think the engineer should never have relied on it, especially since he had concerns about the system.
At the same time, "Autopilot" is misleading, and Tesla does bear responsibility here. Even if it's purely a marketing term, it's designed to upsell you on a half-baked technology.
They should call it "Driver assist" or something like "cruise control plus".
It's a heartbreaking story.
I think self-driving cars are a ridiculous solution to a problem we created and that doesn't need to exist. The US building out the country for cars wasn't a good way to do transportation. If we just had trains going from city to city and subways in the cities, we wouldn't even need self-driving cars.
"And within a few minutes we got freaked out with what seem to us as unpredictable action and we turned it off."
On the one hand, I'm affected by the FUD and will not buy a Tesla, and perhaps I will also feel schadenfreude to the extent that their CEO has misfortunes.
On the other hand, to be fair, other driver assistance features are creating problems too. I think it was Subaru I was just reading about that has serious problems (and a recall) where automatic braking triggers inappropriately and can cause an accident.
People have incredible faith in software and technology, and that's fine and all, but I wonder what people like that hang around HN for.
Autopilot never meant full automatic control. In the aviation context, which was the most common use of the term until now, it is only used to keep a trajectory steady, and anything unusual has to be taken care of by a human pilot.
Besides the death, the worst part is that Tesla/Musk used to be all about sound science bringing benefits to users. Now it's becoming a stats game played through PR. And I'm not even sure anybody bought a Tesla for AP; it was like a clear-coat paint class of option.
"Autopilot" is not misleading. You just don't know what autopilot means.
Autopilot on planes does not mean the plane flies itself and the pilot no longer has to pay attention, ready to take over at a moment's notice. Why would you think it's suddenly different in a car?
"Data from his phone indicates that a mobile game was active during his drive and that his hands were not on the wheel during the six seconds ahead of the crash."
According to social media, YouTubers, and Redditors, Teslas drive themselves and are not simply automobiles with electric propulsion but computers on wheels; they have all the necessary hardware built in and the software gets miraculous updates over the air; it is called Autopilot; there are people sleeping on the highway while their Tesla drives them home; machine learning and AI are about to automate everything; and you are a short-selling devil of the petrol industry if you question any of it.
How fair is it to say that the driver should have had their hands on the wheel at all times, and should have gone and acquired a pilot's license, or at least studied what autopilot actually is?
Do Teslas have driver attention detectors and warning systems? Does the car beep or something like that when you don't hold the wheel?
He complained once before about this stretch, and yet he kept using Autopilot at that same location? I would never dare do that; it sounds suicidal. I would also never dare to fully trust Autopilot, and I think they should change the official name to "driver assistance" until autopilot is actually ready.
Which is how Tesla wants it to be treated, according to the article: "Tesla says Autopilot is intended to be used for driver assistance and that drivers must be ready to intervene at all times."
I would not be quick to make assumptions like this. He could have experienced the issue maybe 1 in 20 times, and intended to remind himself to watch out when he got to that spot. And maybe he forgot this time. It's easy to get into a trance on long, repetitive drives. It isn't like he immediately did back-to-back drives and changed nothing the second time.
And that’s the problem with an “autopilot 99.9% of the time, but still pay attention just in case” feature.
It is not suicidal, but it is probably worthy of an honorable mention in the annual Darwin Awards roundup, since it was very bad judgement on his part.
As an engineer, he should have known better than to keep using the feature the moment it became unreliable for unknown reasons. Those kinds of failure modes are extremely devious killers.
For every tragedy, there are tragedies avoided. I can attest to a few. In the last 10,000 miles, Autopilot has: safely swerved to avoid a car that would have sideswiped me, preemptively braked after detecting that the second car ahead (not visible to me) had slammed on its brakes, and avoided colliding with a completely stopped vehicle in the center lane of the freeway.
And FWIW I've never felt misled about Autopilot's capabilities. I started off skeptical and it's since earned my partial trust in limited scenarios. Ironically its hiccups actually make me more attentive since, ya know, I don't want it to kill me.
How many accidents did you have per 10k miles in your last car? I've never had an accident in the last 200k miles, across three cars; none of which had anything more advanced than regular cruise control.
In the last 10,000 miles I have had no accidents. In fact, I have had none in the last 50,000 miles. It is a big question whether any of the situations you mention would have amounted to so much as a scrape if you had not had Autopilot in the first place.
Yesterday here on HN there was a top thread about how an open dataset used for training self-driving cars is rife with missing labels and mislabeling. [1]
Quite a few commenters insisted the issue was a non-issue -- that all datasets are noisy and mislabeling occurs all the time, that nobody's building actual self-driving cars from it, that surely if an object isn't labeled in one frame it'll be labeled in the next.
Now obviously we don't know what the cause of this particular crash was.
But I will say that I found people's willingness to defend widespread sloppy labeling in training sets used for literal high-speed, life-and-death situations rather shocking.
And I hope that crashes like this serve to remind us of the far greater responsibility we have with regard to quality and accuracy when we're building ML models for self-driving cars than when we're merely predicting how likely a credit card customer is to pay their next bill on time, or which ad is likely to be most profitable.
[1] https://news.ycombinator.com/item?id=22298882
Tesla says Autopilot works but that the driver should be ready to intervene. This is terrifying - like having a kid riding shotgun who might just reach out and turn the wheel while you are on the highway.
Anyone who has driven with autopilot, how quickly might it react to a perceived obstacle? Would it take a hard turn into a median faster than a person could reasonably react if it thought there was something in the road or that the road took a hard left?
Edit: "Ready to intervene" can mean a lot of things - ready to take over in traffic or inclement weather vs. a firm grip on the wheel to resist rogue hard turns.
Actual quote from the article:
>Tesla says Autopilot is intended to be used for driver assistance and that drivers must be ready to intervene at all times.
There are going to be Autopilot crashes, and people should disable it if they are uncomfortable with putting their lives in the hands of Tesla's software. I would be. But I see no reason not to believe what they have on https://www.tesla.com/VehicleSafetyReport
It shows Teslas have far fewer accidents than the average car, and that Teslas with Autopilot enabled are substantially safer per mile than with Autopilot disabled (25-50% more miles driven per accident).
The article says a Prius hit the same place as this Tesla the week before. But the NTSB isn't investigating Toyota.
I was quite confused by this. He knew the car was taking a consistently erroneous action that was likely to have fatal consequences, but he continued to use autopilot on that same stretch anyway.
To be clear, I think Tesla has been cavalier and careless in their approach to autopilot.
But Walter Huang knew what he was doing and did it anyway. His death is largely by his own hand. I don't see this as different from people who knowingly take other careless risks and pay a price.
Think of the "dumb" folks who take a selfie on top of a skyscraper and then slip off. It's not until you are careening towards the abyss that the reality of the situation is fully learned. Up until then, only theoretical.
Don't text while on autopilot with a Level 3 system, which is a nascent technology. An engineer should know better. I wonder why a Level 3 system is allowed to be marketed as an "autopilot".
Tesla has some blame in this for sure, but Caltrans (the maintainer of the 101) is equally if not more to blame.
California is rich in capital but so poor in infrastructure. Moving here from South Carolina (where taxes are much lower and roads are paltry), I was shocked at how dismal a state many of the roads are in and at the sheer amount of time it takes to complete repairs.
Patrick Collison of Stripe actually wrote a blog piece (1) condemning SF for its lack of speed when it comes to road maintenance.
There is no end in sight. Until we as a nation hold our governments accountable for road maintenance and proper engineering, we will never have any great resolution.
(1) https://patrickcollison.com/fast
There's a left-hand exit there because there's an interchange between freeways, and the HOV lane exits on the left, to enable free flowing HOV traffic to avoid merging through general traffic to get to a right exit.
At this interchange, I've seen lots of human drivers change between the exiting HOV lane and the continuing HOV lane (or vice versa) much later than is safe; presumably because they were surprised that their lane was exiting, or they noticed the exit too late, but didn't want to miss it. I haven't seen anyone drive in between the two lanes as if it was a lane, though; and usually the late lane changers are braking, not accelerating.
It is unfortunate that the crash attenuator wasn't reset. Looking at the docs for a similar attenuator[1], it seems like resetting is somewhat involved, and you'd need a trained and properly equipped repair crew to do it, even if the time required is not that much. Scheduling is probably an issue.
[1] https://www.dmtraffic.com/assets/sci_smart_cushion_design_an... page 9-13
Why don't we introduce some of the same requirements for drivers as we have for private pilots (think of the traditional biennial flight review)? Not only would we most likely reduce fatalities, accidents, and injuries, but we'd probably also eliminate many unsafe drivers from the road while placing a forcing function on public transit to increase usage. We'd probably also see a knock-on effect in the training market and increased employment in that field. We might also see an increase in tax/fee income for government entities, helping reduce reliance on gasoline taxes. Obviously I haven't done a rigorous analysis, but it seems like a win all around, in my humble opinion.
I just don't get this situation. With Autopilot, it takes a tiny amount of pressure on the steering wheel to take over. The pressure is so small that, when Autopilot jerks the wheel, you're more likely to turn it off.
So, assuming the driver was paying attention, with his hands on the wheel, this makes no sense. The only way the accident makes sense is if the driver fell asleep, wasn't paying attention, or accidentally turned off autopilot.
And falling asleep, or not paying attention, is a real risk in any car.
(Note: What I remember from older articles about this topic is that the car had been nagging him for a while to put his hands on the wheel.)
I've been using the comma.ai EON for a while and am very happy with it. They've done the "what it knows, and what it knows it doesn't know" part very well.
The system will beep hard at you when the vision system doesn't have confidence. It only has confidence on well-lit, well-marked roads. Otherwise it yells at the human to take over.
Take your eyes off the road and it yells at you. The disengagement rate is pretty high, but I like it. I know it handles the 405 and I-5 highways well, and that's where I use it.
The limitations are easy to understand. They also make you explicitly say yes to all the limitations on first start.
I'm not convinced we should entirely blame Tesla, given that humans have been crashing into the same barrier:
> In the three years before the Tesla crash, the device was struck at least five times, including one crash that resulted in fatalities. A car struck it again on May 20, 2018, about two months after the Tesla crash, the NTSB said.
The article also says the engineer had complained about his Tesla veering towards this particular barrier. I don't understand why he still relied on the autopilot while driving past it.
This is from 2018, and the driver didn't even have his hands on the wheel, though he knew there were problems at that spot. Very careless driving. Personally, I wouldn't play a mobile game while I'm driving a vehicle, never mind while relying on software that is still in development.
I have a car with an "autopilot" feature like the Tesla's, but they call it adaptive cruise control and lane assist. The adaptive cruise control works pretty well; I guess it only needs to check the radar to see what's in front in order to slow down. The lane assist has problems recognizing different kinds of lanes and steers off course from time to time. I guess the current state of "autopilot" technology is simply not there.
keanzu | 6 years ago
https://www.youtube.com/watch?v=RxeK0F-D3gg
r_singh | 6 years ago
Exactly, similar tech has been named Adaptive Cruise Control since the 90s.
inviromentalist | 6 years ago
They are marketing primarily, function second.
My biggest concern is that, since the company isn't profitable(?), these highly risky things are considered acceptable. Other, profitable auto companies cannot be this reckless.
It's not like they can go after personal assets.
m00x | 6 years ago
https://en.wikipedia.org/wiki/Autopilot
> Autopilots do not replace human operators, but instead they assist them in controlling the vehicle.
Names don't really mean much; it could have been called anything. If people think it's 99% reliable and it's convenient, they will use it.
bagacrap | 6 years ago
The only player in autonomous vehicles that doesn't understand the gravity of the technology is Tesla/Musk.
curiousgal | 6 years ago
It was a left-hand exit (why are those even a thing?).
The separator's collapsible crash attenuator wasn't working because a car had previously crashed into it, I wonder why.
No one ever dies in car accidents.
Hence, Tesla is super bad for marketing its autopilot according to HN.