Tesla is also using a modified Nvidia self-driving platform for Autopilot v2, available in early 2017.
Nvidia demonstrated its platform with a large sensor suite: 10 cameras, ultrasonic sensors, and at least one LIDAR.
Tesla, by contrast, uses a sensor mix of radar, ultrasonic sensors, and around 7 cameras - LIDAR is probably still too expensive even for a $70k+ car (the big unit you know from Google's cars costs $70k, and even the smallest costs at least $7k).
It will be interesting to learn about Audi's sensor mix, and which LIDAR product they choose.
A LIDAR is available on Amazon as we speak for under $500, and that's the end-user price. I won't link because I'm not spamming, but the ASIN is B01L1T32PI. Surely it's not the bee's knees, but it's not $7K for sure.
The term "self-driving" in this context has no technical meaning behind it. Cruise control is also "self-driving". And don't get me started on "AI". Jen-Hsun says the car "was trained for four days". WTF? Mobileye has a team of hundreds of annotators working 9-to-5 to generate training data. It's as if this report is tailored to fool credulous readers who picture HAL 9000 driving a car.
The article specifies Level 4 autonomy, which means:
The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.
I think that's happening already, among engineers at least.
There should be some healthy dose of inspiring "propaganda", but this has gotten out of hand - everybody claims "they have it".
I am so sick of this endless stream of lies that I'm not even going to read the article. The next thing I read on the topic should be something like "X has a viable self-driving car - it's hitting the market later this year."
Random thought: the more deep learning is used in training, the less humans will be able to retroactively explain decisions. This surely has liability implications.
NuTonomy has autonomous driving technology based on formal logic [1]. Formal logic seems like a better approach for retroactively explaining software decisions.
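To make the contrast concrete, here is a toy sketch of the prioritized-rule idea (not nuTonomy's actual system): candidate maneuvers are checked against an ordered list of rules, and the reason each one was rejected is recorded - that record is the "retroactive explanation". All rule names and maneuvers below are invented for illustration.

```python
# Toy rulebook: highest-priority rule first. Each rule is a predicate
# over a candidate maneuver; names are invented for this sketch.
RULES = [
    ("never_hit_pedestrian", lambda m: not m["crosses_pedestrian"]),
    ("stay_in_lane",         lambda m: m["in_lane"]),
]

def choose(maneuvers):
    """Return the first maneuver passing all rules, plus an audit trail
    recording which rules each rejected maneuver violated."""
    trail = []
    for m in maneuvers:
        failed = [name for name, ok in RULES if not ok(m)]
        trail.append((m["name"], failed))
        if not failed:
            return m["name"], trail
    return None, trail

best, why = choose([
    {"name": "swerve", "crosses_pedestrian": True,  "in_lane": False},
    {"name": "brake",  "crosses_pedestrian": False, "in_lane": True},
])
print(best)  # brake
print(why)   # [('swerve', ['never_hit_pedestrian', 'stay_in_lane']), ('brake', [])]
```

Unlike a neural net's weights, the `trail` here reads directly as a human-auditable justification for the decision.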
I don't see how this is true. With a machine system you would at least have the ability to log the mathematical operations and results, which would be complex, sure.
But is that more complex than figuring out whether another person is lying about a very nuanced and opaque decision they made?
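A minimal sketch of what "logging the operations" could look like in practice: record every input and output of the learned component so a decision can be replayed later, even if the network's internals stay opaque. The model, field names, and schema here are all hypothetical.

```python
import json

def fake_model(features):
    # Placeholder "network" for the sketch: brake if an obstacle is
    # closer than 10 m, otherwise cruise. A real net would be opaque.
    return "brake" if features["obstacle_distance_m"] < 10 else "cruise"

def decide_and_log(features, log):
    """Run the model and append a replayable record of the decision."""
    decision = fake_model(features)
    log.append(json.dumps({"inputs": features, "decision": decision}))
    return decision

log = []
print(decide_and_log({"obstacle_distance_m": 4.2}, log))   # brake
print(decide_and_log({"obstacle_distance_m": 55.0}, log))  # cruise
```

The log reconstructs exactly what the system saw and did; *why* the net decided remains a separate, harder question.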
I think that Tesla is basically saying 'F it' and releasing something like this either right now (version 2.0) or as a full version before the end of 2017.
But they have more or less been doing that the whole time. It's just that now they have more sensors and deep learning, so they will be autonomous a higher percentage of the time.
So I think as soon as they start rolling it out, more and more Tesla owners will have routine 100% autonomous trips, with some exceptions for weird traffic or weather.
I think this is risky in some ways, but overall it's more ethical than delaying, because the only way to train and engineer for the exceptional situations is to get a lot of vehicles running the system and training on data. Waiting a few years means people die from human error, and you're unlikely to see massive improvements to the system that would make up for that.
One thing people will eventually realize is that we create a lot of driving situations that are structurally unsafe. For example, it is accepted to speed past pedestrians or bicyclists a few feet away on the sidewalk or bike lane. No level of AI advancement can prevent some random horrific accidents in that case - it could be as simple as a pedestrian crossing the street a little early. People are not going to tolerate AIs going 5 mph anytime a pedestrian is nearby, but that's the only way you could prevent fatalities in some situations. That is part of the 'low confidence situations' the Nvidia guy mentioned. So we actually need laws to protect autonomous tech companies in those situations, or that will delay deployment and lead to more deaths from human error.
I know lawyers and states need time to work out the legal framework around driverless cars, but I feel like 2020 is simply there to appease the auto companies and give them another few years to stall.
Why is everyone so damn cynical about self-driving cars?
We know this is coming, and this technology will improve the lives of so many people in the long run. Maybe it's from Nvidia and Audi, maybe Tesla, Uber, Google, that dude who launched and failed and ran away to China, who knows?
I'm excited to think about what opportunities will start to open up once humans don't need to spend 2+ hrs a day with their hands on the wheel :)
"Audi and NVIDIA developed an Audi Q7 piloted driving concept vehicle, which uses neural networks and end-to-end deep learning. Demo track at CES 2017 in Las Vegas."
The scary thing about non-ad-hoc techniques is that a deep net is a "black box" -- you really don't know how pathologies occurred, nor do you know how to fix them.
Not only that, there are _inherent_ pathologies associated with using deep nets in the first place.
DB: So, control theory is model-based and concerned with worst case. Machine learning is data based and concerned with average case. Is there a middle ground?
BR: I think there is! And I think there's an exciting opportunity here to understand how to combine robust control and reinforcement learning. Being able to build systems from data alone simplifies the engineering process, and has had several recent promising results. Guaranteeing that these systems won't behave catastrophically will enable us to actually deploy machine learning systems in a variety of applications with major impacts on our lives. It might enable safe autonomous vehicles that can navigate complex terrains. Or could assist us in diagnostics and treatments in health care. There are a lot of exciting possibilities, and that's why I'm excited about how to find a bridge between these two viewpoints.
For instance, the network might only be used to decide among a series of actions, and those actions can still have limits (such as “car cannot travel faster than X” or “Y cannot change more than 3 times per minute”, or whatever). There is still an abundance of attention put into safety, as usual for the auto industry. It isn’t just a brain hooked up to an engine that is allowed to run rampant.
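The kind of hard limit described above can be sketched in a few lines - a safety wrapper that bounds whatever the learned controller proposes. This is illustrative only, not any vendor's actual code; names like `MAX_SPEED_MPS` and the `(speed, steer)` command shape are made up for the example.

```python
MAX_SPEED_MPS = 30.0     # "car cannot travel faster than X"
MAX_STEER_DELTA = 0.05   # limit how much steering can change per tick

def clamp(value, lo, hi):
    """Constrain a value to the interval [lo, hi]."""
    return max(lo, min(hi, value))

def safe_command(policy_output, prev_steer):
    """Apply hard limits to a raw (speed, steer) pair from the network."""
    speed, steer = policy_output
    speed = clamp(speed, 0.0, MAX_SPEED_MPS)
    # Rate-limit steering: at most MAX_STEER_DELTA from the last tick,
    # so the net cannot jerk the wheel no matter what it outputs.
    steer = clamp(steer, prev_steer - MAX_STEER_DELTA,
                  prev_steer + MAX_STEER_DELTA)
    return speed, steer

# A wildly out-of-range network output gets clipped to the envelope.
print(safe_command((120.0, 0.9), prev_steer=0.0))  # (30.0, 0.05)
```

The network can still choose *within* the envelope, but the envelope itself is conventional, verifiable engineering - exactly the "brain is not allowed to run rampant" point.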
An alien race landed on earth and demands to play a game of Go. We only get to play one game with them. If they win, our planet is destroyed.
Who would you trust to play for the human race if this scenario happened tomorrow? Lee Sedol or AlphaGo? Remember that we do not completely understand how AlphaGo reasons, it is still a black box to us.
... and soon the trend will be one car company owned by each tech company. I wonder when the first tech companies will start buying car companies -- in the tried and true spirit of tech (software) eating the world.
forgetsusername | 9 years ago
One claim versus another, both optimistic. I'm not going to believe Audi or Tesla until I see it.
Vik1ng | 9 years ago
Enhanced Autopilot available mid December 2016... oh wait.
Traubenfuchs | 9 years ago
If those claims were true, our phones and notebooks would already have power for eternity.
javiramos | 9 years ago
[1] http://spectrum.ieee.org/transportation/self-driving/after-m...
joeyspn | 9 years ago
Nvidia stock (NASDAQ:NVDA) has quadrupled in the last year and will probably keep climbing.
dbcooper | 9 years ago
http://www.zf.com/corporate/en_de/press/list/release/release...
matthewmarkus | 9 years ago
https://www.oreilly.com/ideas/machine-learning-in-the-wild
deepnotderp | 9 years ago
/s
k__ | 9 years ago
You're welcome ;)