
Nvidia and Audi aim to bring a self-driving AI car to market by 2020

215 points | by t23 | 9 years ago | techcrunch.com

133 comments

[+] frik|9 years ago|reply
Tesla is also using a modified Nvidia self-driving platform for Autopilot v2, available in early 2017.

Nvidia demonstrated their platform with many sensors: 10 cameras, ultrasonic sensors, and at least one LIDAR.

Whereas Tesla is using a sensor mix of radar, ultrasonic sensors, and around 7 cameras - LIDAR is probably still too expensive even for a $70k+ car (the big unit you know from Google's cars costs $70k; the smallest costs at least $7k).

It will be interesting to learn about Audi's sensor mix, and what LIDAR product they choose.

[+] forgetsusername|9 years ago|reply
>Tesla...available in early 2017.

One claim versus another, both optimistic. I'm not going to believe Audi or Tesla until I see it.

[+] chx|9 years ago|reply
A LIDAR is available on Amazon as we speak for under $500, and that's the end-user price. I won't link because I'm not spamming, but search for B01L1T32PI. Surely it's not the bee's knees, but it's not $7k for sure.
[+] Vik1ng|9 years ago|reply
> available in early 2017.

Enhanced Autopilot available mid December 2016... oh wait.

[+] ilaksh|9 years ago|reply
I believe they are using 'HD cloud maps', RADAR, and deep learning vision processing _instead_ of LIDAR.
[+] KKKKkkkk1|9 years ago|reply
The term "self-driving" in this context has no technical meaning behind it. Cruise control is also self-driving. And don't get me started on "AI". Jen-Hsun is saying that the car "was trained for four days". WTF? Mobileye has a team of 100s of annotators working 9-to-5 generating training data. It's as if this report is tailored to fool credulous readers who have a vision of HAL 9000 driving a car in their mind.
[+] jayjay71|9 years ago|reply
The article specifies Level 4 autonomy, which means:

The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.

[+] option|9 years ago|reply
4 days as in training time given all the compute resources they could allocate to the problem
[+] amenod|9 years ago|reply
Exactly. Who cares if they have trained it for only 4 days? That doesn't mean it would get twice as good (or even any better) in 8 days' training.
[+] thinkloop|9 years ago|reply
Wonder how many more times this can get announced before the PR value becomes a net loss.
[+] tsenkov|9 years ago|reply
I think that's happening already, among engineers at least.

There should be some healthy dose of inspiring "propaganda", but this got out of hand - everybody claims "they have it".

I am so sick of this endless stream of lies that I'm not even going to read the article. The next thing I read on the topic should be something like "X has a viable self-driving car - it's hitting the market later this year."

[+] Traubenfuchs|9 years ago|reply
It's just like battery innovations.

If those were true, our phones and notebooks would already run forever.

[+] peteretep|9 years ago|reply
Random thought: the more deep learning is used in training, the less humans will be able to retroactively explain decisions; this surely has liability implications
[+] bobcostas55|9 years ago|reply
Perhaps NNs need functionality to come up with fake rationalizations for their decisions just like the human brain does.
[+] wavefunction|9 years ago|reply
I don't see how this is true. With a machine system you would at least have the ability to log the mathematical operations and results, which would be complex, sure.

But more complex than figuring out if another person is lying about a very nuanced and opaque decision they made?

[+] whistlerbrk|9 years ago|reply
I feel like this is very actively being addressed, for example by visualization of intermediate features.
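A toy illustration of what inspecting an intermediate feature can look like (everything here is illustrative - a hand-written convolution and a hand-picked edge kernel, not anything from Nvidia's actual stack): the feature map lights up exactly where the input has a vertical edge, which is the kind of interpretable signal those visualizations surface.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 "image" with a vertical edge down the middle.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A vertical-edge detector; in a trained net this kernel would be learned.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

# The conv output is an "intermediate feature map": it responds strongly
# exactly where the edge sits, which is what a visualization would show.
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (4, 4)
print(feature_map.max())  # 3.0, at the edge columns
```

In a real network you'd pull these maps out of the trained layers rather than a hand-built kernel, but the inspection idea is the same.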
[+] joeyspn|9 years ago|reply
Bitcoin and cryptocurrencies, AI, self-driving cars... good (and profitable) times for GPU manufacturers.

Nvidia stock (NASDAQ:NVDA) is up 4x in the last year and will probably keep climbing..

[+] ilaksh|9 years ago|reply
I think that Tesla is basically saying 'F it' and releasing something like this either right now (version 2.0) or full version before end of 2017.

But they have more or less been doing that the whole time. It's just that now they have more sensors and deep learning, so the cars will be autonomous a higher percentage of the time.

So I think as they roll it out, more and more Tesla owners will see 100% autonomous trips become common, with some exceptions for weird traffic or weather.

I think this is risky in some ways, but overall it's more ethical than delaying, because the only way to train/engineer for the exceptional situations is to get a lot of vehicles running the system and training on data. Waiting a few years means people die from human error, and you're unlikely to see massive improvements to the system that would make up for that.

One thing people will realize eventually is that we create a lot of driving situations that are structurally unsafe. For example, it is accepted to speed past pedestrians or bicyclists a few feet away on the sidewalk or bike lane. No level of AI advancement can prevent some random horrific accidents in that case. It could be as simple as a pedestrian crossing the street a little early. People are not going to tolerate AIs going 5 mph anytime a pedestrian is nearby, but that's the only way you could prevent fatalities in some situations. That is part of the 'low confidence situations' the Nvidia guy mentioned. So we actually need laws to protect autonomous tech companies in those situations, or that will delay deployment and lead to more deaths from human error.

[+] ParadisoShlee|9 years ago|reply
I know lawyers and states need time to sort out the legal paperwork around driverless cars, but I feel like 2020 is simply to appease the auto companies and give them another few years to stall.
[+] zxcvvcxz|9 years ago|reply
Why is everyone so damn cynical about self-driving cars?

We know this is coming, and this technology will improve the lives of so many people in the long run. Maybe it's from Nvidia and Audi, maybe Tesla, Uber, Google, that dude who launched and failed and ran away to China, who knows?

I'm excited to think about what opportunities will start to open up once humans don't need to spend 2+ hrs a day with their hands on the wheel :)

[+] lucidrains|9 years ago|reply
Video: https://www.youtube.com/watch?v=7jS4AuPnmyg

"Audi and NVIDIA developed an Audi Q7 piloted driving concept vehicle, which uses neural networks and end-to-end deep learning. Demo track at CES 2017 in Las Vegas."

[+] remir|9 years ago|reply
I wonder how it compares to Tesla Vision.
[+] Hydraulix989|9 years ago|reply
The scary thing about non-ad-hoc techniques is that a deep net is a "black box" -- you really don't know how pathologies occurred, nor do you know how to fix them.

Not only that, there are _inherent_ pathologies associated with using deep nets in the first place.

[+] matthewmarkus|9 years ago|reply
DB: So, control theory is model-based and concerned with worst case. Machine learning is data based and concerned with average case. Is there a middle ground?

BR: I think there is! And I think there's an exciting opportunity here to understand how to combine robust control and reinforcement learning. Being able to build systems from data alone simplifies the engineering process, and has had several recent promising results. Guaranteeing that these systems won't behave catastrophically will enable us to actually deploy machine learning systems in a variety of applications with major impacts on our lives. It might enable safe autonomous vehicles that can navigate complex terrains. Or could assist us in diagnostics and treatments in health care. There are a lot of exciting possibilities, and that's why I'm excited about how to find a bridge between these two viewpoints.

https://www.oreilly.com/ideas/machine-learning-in-the-wild
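One way to picture that middle ground (purely illustrative, not from the interview): wrap a learned average-case policy in a model-based worst-case check, and fall back to a provably safe action whenever the check fails. The braking model and the stand-in policy below are both made up for the sketch.

```python
BRAKE_DECEL = 5.0  # m/s^2, assumed worst-case braking capability

def learned_policy(gap: float) -> float:
    """Stand-in for an RL policy: drive faster when the gap is larger."""
    return 0.5 * gap  # proposed speed in m/s

def stopping_distance(speed: float) -> float:
    """Worst-case distance needed to stop from `speed` (v^2 / 2a)."""
    return speed * speed / (2.0 * BRAKE_DECEL)

def shielded_action(gap: float) -> float:
    """Use the learned action unless the worst-case model rejects it."""
    proposed = learned_policy(gap)
    if stopping_distance(proposed) <= gap:
        return proposed
    # Fall back to the fastest provably safe speed: v = sqrt(2 * a * gap).
    return (2.0 * BRAKE_DECEL * gap) ** 0.5

print(shielded_action(10.0))   # learned 5.0 m/s, stops in 2.5 m: allowed
print(shielded_action(200.0))  # learned 100 m/s unsafe; capped to ~44.7
```

The learned part handles the average case; the analytic check guarantees the worst case, which is roughly the combination the interview is asking for.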

[+] makecheck|9 years ago|reply
It doesn’t have to be all or nothing.

For instance, the network might only be used to decide among a series of actions, and those actions can still have limits (such as “car cannot travel faster than X” or “Y cannot change more than 3 times per minute”, or whatever). There is still an abundance of attention put into safety, as usual for the auto industry. It isn’t just a brain hooked up to an engine that is allowed to run rampant.
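A hypothetical sketch of that kind of envelope (names and limits are made up): the network proposes a speed, and a hard-coded wrapper clamps both the absolute value and the rate of change before anything reaches the actuators.

```python
MAX_SPEED = 30.0  # m/s hard cap ("car cannot travel faster than X")
MAX_DELTA = 2.0   # m/s maximum change per control tick

def safe_command(proposed_speed: float, current_speed: float) -> float:
    """Clamp a network-proposed speed to the safety envelope."""
    # Bound how fast the command may change per tick.
    delta = proposed_speed - current_speed
    delta = max(-MAX_DELTA, min(MAX_DELTA, delta))
    # Enforce absolute limits regardless of what the network says.
    return max(0.0, min(MAX_SPEED, current_speed + delta))

print(safe_command(50.0, 29.5))  # capped at 30.0
print(safe_command(10.0, 20.0))  # 18.0: limited to -2 m/s per tick
```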

[+] foglerek|9 years ago|reply
As do humans (see the various cognitive biases). The question is: does the AI's black box result in a safer outcome?
[+] lucidrains|9 years ago|reply
Fun thought experiment.

An alien race landed on earth and demands to play a game of Go. We only get to play one game with them. If they win, our planet is destroyed.

Who would you trust to play for the human race if this scenario happened tomorrow: Lee Sedol or AlphaGo? Remember that we do not completely understand how AlphaGo reasons; it is still a black box to us.

[+] deepnotderp|9 years ago|reply
Definitely, quite scary, keeps me up at night. At least I know EXACTLY what my taxi driver is thinking....

/s

[+] k__|9 years ago|reply
And all this because of gamers.

You're welcome ;)

[+] visarga|9 years ago|reply
And the guys with the NNs.
[+] slaunchwise|9 years ago|reply
Audi? I'd be happy if they could build a car with a damned USB port and a touch screen.
[+] nameisu|9 years ago|reply
Lately, every few weeks the trend in the news is one tech company + one car company.
[+] seeekr|9 years ago|reply
... and soon the trend'll be 1 car company owned by a tech company. Wondering when the first tech companies are going to start buying car companies -- in the tried and true spirit of tech (software) eating the world.