More info: Tesla's radar is a Bosch device, either the Bosch LRR4, which is several years old, or the Bosch mid-range radar. Bosch has made about 10 million of these and related models; Tesla is not the only customer.
This device isn't enough for a full point cloud. It doesn't scan in elevation, just azimuth. Some variants do have an upward-pointing beam in addition to the main forward beam, which spans about 5 degrees vertically.
There are automotive radars which scan in 3D[2], but Tesla's is not one of them.
Small radars are rather blunt instruments. You tend to get one point for each target, not lots of points. The beam focus isn't that tight. Tighter focus requires a larger antenna array.
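A rough back-of-envelope sketch of why the beam is so blunt (my numbers, not from the comment: a 77 GHz carrier and a ~7 cm antenna aperture are assumed, typical for this class of sensor):

```python
# Why a small automotive radar yields roughly "one point per target":
# the diffraction-limited beamwidth is about wavelength / aperture, so a
# small antenna gives a cross-range cell wider than a whole car.
C = 3.0e8          # speed of light, m/s
FREQ = 77e9        # automotive radar band, Hz (assumed)
APERTURE = 0.07    # antenna width in metres (assumed)

wavelength = C / FREQ                    # ~3.9 mm
beamwidth_rad = wavelength / APERTURE    # rough diffraction-limited beamwidth
cell_at_100m = beamwidth_rad * 100       # cross-range resolution at 100 m

print(f"wavelength: {wavelength * 1000:.1f} mm")
print(f"beamwidth: {beamwidth_rad:.3f} rad")
print(f"cross-range cell at 100 m: {cell_at_100m:.1f} m")
```

At 100 m the beam is several metres wide, which is why tighter focus requires a larger antenna array.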
I believe you, but I think that misses the point. My takeaway here is that Tesla is just getting started with Autopilot.
Despite the hype, this blog post makes it clear that Autopilot is just a very simple camera-based system that requires human monitoring (but is still very useful in my personal experience). Now they are releasing a software update that uses the previously unused radar hardware as well as machine learning to achieve better results. This is exciting because they're going to keep releasing software updates over the air, and the software is just going to keep getting better and better.
The fact that the hardware is shitty doesn't concern me. Google's self-driving car project depends on a lidar system that costs $75,000 today. That's almost four times what the average car costs, and it doesn't work in bad weather. Regardless of how good Google's software is, it could mean nothing if Tesla achieves a similar or better result using cheap commodity hardware. The hardware will only get better over time at the same price.
Tesla is in a better position to bring full autonomy to market than anyone else, since they control the hardware, the software, have cars in the field, etc. For this reason I wouldn't be surprised if Tesla becomes the first company to break a market capitalization of $1 trillion. Computers gaining the ability to move around the world via drones and autonomous driving will have an economic impact bigger than the introduction of the internet. Where we are now is meaningless. What matters is that Musk has stated the goal and we have something in the field today that can be updated iteratively over time until it's perfect (which will be never). We may not be very far along this journey, but we've taken the hardest first steps towards the next big technological revolution that will once again change everything about how humans live, the implications of which we can't even begin to imagine.
> "The update will also penalize inattentive drivers. If the car determines that the driver doesn't have their hands on the wheel and throws its audible warning three times in an hour, it will lock the driver out of the feature. In order to re-enable Autopilot, the car will have to be pulled over and put in park."
Reminder that proponents of autopilot systems make the utilitarian argument that if self-driving cars are even slightly better at keeping people alive than human drivers, then it is worthwhile to keep them on the road. Tesla's implementation didn't have the safety features that other manufacturers had in their assisted cruise control systems, and people died. Preventing unneeded deaths caused by a false sense of confidence in Tesla's Autopilot trumps a few disgruntled users.
As a Tesla owner I always keep a hand loosely on the wheel and I never get a warning. In fact, the only time I've seen the warning was when I deliberately kept my hands off to get a feel for when it occurs and what it looks and sounds like. I still haven't been brave enough to let the car trigger the fail-safe "come to a stop" scenario by ignoring multiple warnings. I also wonder if doing so might cause a black mark against my Tesla account ;)
Tesla needs to show the regulators that this technology is still driver assist. They need to prove that they are making a reasonable effort to remind the driver to pay attention.
I keep my hands off the wheel all the time and it hardly ever gives me the warning. And the blog post says that it only stops you if you ignore the warnings, i.e. presumably if you don't put your hands back on the wheel. In reality it is pretty nice to relax in the car and put your hands down while still paying enough attention that you can instantly grab the wheel if necessary. But yes, it does worry me that this might be annoying.
The entire point of having autopilot is not to have your hands on the wheel! I really don't think this serves any purpose other than legal caveats. Even with your hands on the wheel, it takes much longer to react if something bad happens. I hope this is just because it's early days of self-driving, so we can collect some test data and improve the system to the point that it is actually self-driving.
Is it just me or does whitelisting static objects to determine whether the car will collide with them seem like a bit of a crude hack? It almost sounds like the system will brake at newly installed traffic signs.
edit: Upon closer reading, he explains it somewhat. Once they have enough data, the system will start braking on unknown objects with gradually increasing force as the confidence level rises. So basically, it will brake on unknown traffic signs but only slightly, as the confidence level shouldn't get too high, if I understand that correctly.
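A minimal sketch of that confidence-gated braking idea as I read it (the threshold and linear ramp are my invention, not Tesla's actual control law):

```python
# Hypothetical sketch: below a confidence threshold the car takes no action;
# above it, braking force ramps up with confidence in the collision prediction.
def braking_fraction(confidence, threshold=0.3, full_at=0.99):
    """Return the fraction (0..1) of maximum braking for a given confidence."""
    if confidence <= threshold:
        return 0.0
    # Linear ramp between the threshold and near-certainty.
    return min(1.0, (confidence - threshold) / (full_at - threshold))

# An unknown roadside sign might only ever reach modest confidence,
# so it would trigger at most light braking:
print(braking_fraction(0.4))   # light braking
print(braking_fraction(0.99))  # full braking
```

This matches the "brake on unknown objects, but only slightly" reading: an object the fleet hasn't confirmed never climbs far above the threshold.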
The last paragraph sounds technically challenging and interesting:
"Taking this one step further, a Tesla will also be able to bounce the radar signal under a vehicle in front - using the radar pulse signature and photon time of flight to distinguish the signal - and still brake even when trailing a car that is opaque to both vision and radar. The car in front might hit the UFO in dense fog, but the Tesla will not."
edit: It seems like they are already doing it beginning with this update: "Now controls for two cars ahead using radar echo, improving cut-out response and reaction time to otherwise-invisible heavy braking events". That sounds awesome.
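The time-of-flight arithmetic behind that is simple; the ranges below are made up, but they show that an echo bounced under the lead car to the vehicle beyond it arrives measurably later than the direct echo:

```python
# A radar echo's round-trip time gives range (t = 2d / c), so the return from
# the car *two ahead* is separated in time from the return off the car directly
# in front. Distances here are illustrative only.
C = 3.0e8  # speed of light, m/s

def echo_delay(range_m):
    """Round-trip time of an echo from a target at the given range."""
    return 2 * range_m / C

lead_car = 30.0        # m to the car directly ahead (assumed)
car_two_ahead = 45.0   # m to the otherwise-invisible vehicle beyond it (assumed)

dt = echo_delay(car_two_ahead) - echo_delay(lead_car)
print(f"extra delay for the bounced echo: {dt * 1e9:.0f} ns")  # ~100 ns
```

That 100 ns gap is well within what pulse-timing hardware can resolve, which is presumably how the "radar echo of two cars ahead" is separated from the near return.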
Self-driving cars are entirely reliant upon mapping data for all sorts of functionality. For example, how is it supposed to differentiate between curved roads and on/off ramps? Decent navigational maps require huge amounts of manual intervention, which is why Apple's mapping software initially sucked. The maps for self-driving cars will require an order-of-magnitude more data.
Yes, this seems really complex to me, too. I'm also not quite sure on why exactly it is so complex to distinguish between objects (including vehicles) on the road and ones above/next to it.
If they have radar images (I imagine them as images with depth information, which might be fundamentally wrong), they should be able to tell both where the road is going and, with that information, which of these objects are of relevance.
But for the learning part, they probably combine the camera, which they have used to date as their primary sensor, with the radar (at least in daylight scenarios) to identify objects. They may even be able to learn about special material properties, like the reflective coating of a traffic sign.
The author of the article is a nameless collective, not an individual. They explain it, but there is no "he" to ascribe the explanation to. We do not know the names of the authors.
Yup. As soon as you're white/blacklisting you've lost at AI, because you're assuming the rules never change; and if that's the case, why use a learning AI in the first place?
I wonder how many fatalities it'll take before they realise that 'the driver should've been paying attention' isn't a good enough excuse.
I think you have it backwards. The radar will not initiate braking events for newly discovered stationary objects. Once it has human feedback from multiple sources, the object will be whitelisted and cause braking events.
blacklisting would cause the behaviour you describe.
> The car computer will then silently compare when it would have braked to the driver action and upload that to the Tesla database...whether Autopilot is turned on or off, then that object is added to the geocoded whitelist.
(emphasis mine)
This has interesting privacy implications. I am not a Tesla owner, but I imagine that by enabling Autopilot you consent to providing Tesla with diagnostic, error, and sensor data. But what about those who have not enabled this feature? Their Tesla will automatically phone home with data regarding their location and surroundings regardless of whether or not they have consented to this?
I was really fascinated by that comment as well, but not for privacy reasons. I believe that while it would be possible to peruse server logs at Tesla to understand where a particular car was at a particular time, that is no worse than OnStar or current phone GPS tracking.
The interesting thing is the data set of watching humans drive and using models to drive in the same place. This only works if the "place" is not notably different from the model. Say a semi has hit an overhead sign and it's now hanging into the roadway: can the car distinguish between a sign hanging sideways and one that is attached normally?
Severe storms and downed power lines are another interesting question. Does Autopilot recognize that the environment has been grossly modified and refuse to drive? Earthquakes, tornadoes, and floods can all grossly change the environment at a particular geocoded location.
What if a Tesla owner's club decides to use a piece of highway 58 out in Nevada as a race strip? Does autopilot assume that when you hit this point you are supposed to stomp the accelerator and go as fast as you can? (ok that is a stretch)
It's the data without the knowledge. Something machine learning is bad at (hence turning chat bots into vitriol spewing fascists). VERY interesting times.
I feel like Tesla is a super creepy company that people aren't giving enough scrutiny to just because their tech is so great. What was that story about them remotely locking out a hacker from snooping around in their software? What about all those articles where they do an uncomfortably accurate play-by-play of someone's accident? I'm not comfortable with the idea of a company having that much access to private data.
What surprises me is how (at least here in HN) there was a general feeling of annoyance when Apple tracked user locations with Apple Maps to identify traffic patterns, even though Google did the same for Google maps.
But now with Tesla user-tracking, people seem to be actively psyched at being tracked by Tesla.
I thought that Tesla have always been tracking their cars, autopilot or not. IIRC, there was a story from over a year ago where Tesla complained about a car review, citing the data that the reviewer's car had transmitted. This included location information, battery power, etc.
Perhaps there is a way to opt out of this - but also, if you opt out, do Tesla disable features of the car?
It never ceases to amaze me how software improvements can greatly expand the capability of given hardware - all this is done on top of the 2014 Autopilot hardware. That is what I like about software: there seem to be very few hard limits you cannot work around with a clever new approach.
In the race to the self-driving car, Tesla now has one big advantage: they have tens of thousands of cars with the autopilot hardware driving around every day. This gives them a huge lead in the amount of data about their software performance - just comparing what the radar sees and how the human drives in any situation should make a difference.
As much as I love writing clever software to work around hardware limitations (I believe it's what makes videogame programming of the 80's and 90's fascinating and led to better creativity and better games), I once had the job of writing software for "broken" hardware. By this I mean I was in charge of a computer vision algorithm, and the camera was physically incapable of taking the images necessary for the algorithm to function. It's worth mentioning this was for a self-driving car.
Tesla will never come close to a Level 4 autonomous vehicle with the hardware rigs they're currently selling. That said, it'll be interesting to see what improvements they can make with software. Given their over-promise and under-deliver history (which arguably killed someone), I'll take their marketing with a grain of salt.
I'd be interested to hear how the radar handles other radar signals. Given the use of police radar, radar detectors, and radar-based collision avoidance like what's found in the rear tail-lights of some Ford F-150s and, of course, a fair number of Audis, it would seem the environment could get noisy at times. Yes, the Tesla radar could operate at a specific frequency that would minimize interference, but what happens when a bad actor decides to intentionally "blind" that radar signal? I assume that, given this is a life-critical system, it would have countermeasures, perhaps utilizing a lidar or camera-based backup?
I don't know how this particular radar system works, but in general you can modulate radar signals with a special encoding, so that only the sender can receive and interpret them. For all others the signal is just noise. At university we worked with m-sequence-based radar systems that have these properties. This would minimize the possibility that someone (accidentally or not) can send signals that you misinterpret. Depending on the remaining possibility you might still want to take some countermeasures.
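For illustration, here is the m-sequence property in miniature (a textbook LFSR sketch, nothing to do with the Bosch unit's actual coding): the periodic autocorrelation has a single sharp peak, so a receiver correlating against its own code sees other transmitters' signals as near-noise.

```python
# Generate a maximal-length sequence (m-sequence) with a Fibonacci LFSR and
# show its autocorrelation: a peak of N at zero lag, and -1 everywhere else.
def lfsr_msequence(nbits=5, taps=(5, 3)):
    """One period (2**nbits - 1 bits) of an m-sequence.
    nbits=5 with feedback taps at stages 5 and 3 is maximal-length (31 bits)."""
    state = 1  # any nonzero seed works
    out = []
    for _ in range(2**nbits - 1):
        out.append((state >> (nbits - 1)) & 1)        # output the last stage
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1              # XOR the tapped stages
        state = ((state << 1) | fb) & (2**nbits - 1)  # shift left, feed back
    return out

def circular_correlation(a, b):
    """Periodic cross-correlation of two bit sequences, mapped to +/-1."""
    n = len(a)
    sa = [1 - 2 * x for x in a]
    sb = [1 - 2 * x for x in b]
    return [sum(sa[i] * sb[(i + k) % n] for i in range(n)) for k in range(n)]

seq = lfsr_msequence()
corr = circular_correlation(seq, seq)
print(corr[0])        # 31: the correlation peak equals the sequence length
print(set(corr[1:]))  # {-1}: every off-peak lag is near-noise
```

A signal that doesn't carry your code (another radar, or a jammer without the key) correlates to values near that -1 floor rather than the peak, which is the rejection property described above.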
I think that question is not so pressing until the system is declared fully autonomous, as in: no human required to drive anymore. As long as it's just an assistant, the human is supposed to brake. But since they surely want Autopilot to become an actual autopilot...
You could do the same with "normal" cameras (and probably lidar as well), I guess, by pointing a laser pointer at them. The safest option is probably to just brake.
Here Elon Musk says they don't use lidar: https://techcrunch.com/2016/09/11/tesla-autopilot-8-0-uses-r...
(As an aside: some years ago we had incidents in the news where people pointed laser pointers at aircraft pilots during landing. The pilot's appropriate response is usually to abort the landing and do a go-around, because that's the safest thing to do in that situation.)
The fleet of Tesla cars on road is an advantage that Uber and Lyft have over Google. They can deploy cars with a lot of sensors on the road AND make money off of it for the most part!
If data is the differentiating factor in this game, Google has less of it! Which is an interesting position for Google to be in!
It's quite strange to see how ... flexible the requirements-level changes are. As someone steeped in safety- and human-life-critical software development, this seems very odd.
This is especially true of "Interface alerts are much more prominent, including flashing white border on instrument panel." This has been a huge thing in aviation automation for like ... forever.
I always assumed that humans evolved eyes adapted to the wavelengths of light because it gave optimal information to avoid collisions. (Well, except for glass panes; we're not genetically optimized for those.)
It feels scary to discard millions of years of evolution and go with radar-first, but as always, time will tell.
"This is where fleet learning comes in handy. Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar."
Wow, glad to see that they are using big data and machine learning. If all those 400k orders go through, there will be a network effect in favor of Tesla.
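A toy sketch of how such a geocoded whitelist could work, going only by the quoted description (the pass-count threshold and coordinate precision are my assumptions):

```python
# My reading of the quoted scheme, not Tesla code: each stationary radar
# return is keyed by a coarse geocode; once enough cars have driven safely
# past it, it is whitelisted and no longer considered a braking candidate.
from collections import defaultdict

SAFE_PASSES_NEEDED = 5  # assumed threshold

safe_passes = defaultdict(int)

def geocode(lat, lon, precision=4):
    """Coarse location key (4 decimal places is roughly 10 m resolution)."""
    return (round(lat, precision), round(lon, precision))

def record_safe_pass(lat, lon):
    """A car drove past this stationary radar object without incident."""
    safe_passes[geocode(lat, lon)] += 1

def is_whitelisted(lat, lon):
    return safe_passes[geocode(lat, lon)] >= SAFE_PASSES_NEEDED

# A road sign many cars have passed without incident gets whitelisted:
for _ in range(6):
    record_safe_pass(37.4275, -122.1697)
print(is_whitelisted(37.4275, -122.1697))   # True
print(is_whitelisted(37.5000, -122.2000))   # False: unknown object
```

The "network effect" point follows directly: the more cars reporting safe passes, the faster every location crosses the threshold.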
Perhaps Tesla might release the data. Musk is all about improving humanity; selling cars is a means to an end, rather than the end itself, and that end could be hastened if every manufacturer could access (and augment?) the database.
...imagine this in racing. I saw some races over the weekend and the racing line varied considerably between cars and drivers, unlike F1 where every driver is that good. I imagine the wider population has an even wider idea of what the 'racing line' is. In theory a Tesla could take 'better than Senna' lines through every curve, avoiding crashes and also optimising efficient regenerative braking. I look forward to this and I am glad the Tesla brain is learning from 'the fleet'.
If I were to ever buy a Tesla, could I turn off data collection?
If not... that would turn off a lot of privacy-conscious people, which Tesla doesn't tend to attract at its current prices but may become relevant as they come out with cheaper cars.
Every time I am reminded of how Tesla can update their cars in the field, I imagine the stress of the responsibility of securing a network like that, and the risk a compromise carries. Heavy work for some team.
The "simple" radar cruise control in my 2014 Mazda 3 Astina (ex-demo; MT) is an amazing experience. I'm finding it remarkably accurate, even picking up objects (motorcycles, bicycles) that they explicitly state will not be correctly registered. The simple yet effective HUD indicates the current following distance (as a side note, most drivers are way closer than 2 s). Coupled with the visual AEB system, I have autonomous braking for traffic and emergencies. I still have to steer; however, the lane departure system warns if I'm exiting my lane without indicating at 65+ km/h.
It might not be anything like Tesla autopilot, but it's still a pretty sweet taste of the future. So stands to reason more can be done with it; I wish I had the funds for a Tesla... Maybe one year :)
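For reference, the "2 seconds" in that following-distance readout is just a time gap multiplied by speed (illustrative arithmetic only):

```python
# The 2-second rule: the gap to the car ahead, in metres, is the time gap
# times your speed (km/h converted to m/s by dividing by 3.6).
def gap_metres(speed_kmh, gap_seconds=2.0):
    return speed_kmh / 3.6 * gap_seconds

print(f"{gap_metres(65):.0f} m at 65 km/h")    # ~36 m
print(f"{gap_metres(100):.0f} m at 100 km/h")  # ~56 m
```

Seen as actual distances, it's easy to believe most drivers follow far closer than the rule suggests.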
Slightly off topic, but are the software updates optional or mandatory in Tesla cars? As I do with phones, I hold off on updates until I know they're stable and have no major bugs. Serious bugs in car software could be fatal.
> The radar was added to all Tesla vehicles in October 2014 as part of the Autopilot hardware suite, but was only meant to be a supplementary sensor to the primary camera and image processing system.
I guess I'm surprised that what sounds like a large change in ConOps can be rolled out as an upgrade across a fleet in such a short period of time. It'd be fascinating to hear what sort of V&V had to be done, and how it was accomplished so quickly, to make this happen.
I hope they take their time and not rush the update. This is probably the biggest change since introducing Autopilot. Also, Tesla is now in a very dangerous moment where a serious failure can bury Autopilot.
[1] http://www.automotiveworld.com/news-releases/bosch-presents-... [2] http://www.fujitsu-ten.com/business/technicaljournal/pdf/38-...
Vik1ng:
https://www.engadget.com/2016/09/11/tesla-s-next-autopilot-u...
If Engadget got that right, I think we will see a lot of upset Tesla owners in a few weeks.
arcticfox:
Probably also a small handful fewer dead ones over the next few years.
hexane360:
One of the only situations where both "brake" and "break" have the same meaning in a sentence.
pcl:
> With further data gathering, car will activate Autosteer to avoid collision when probability ~100%
> Curve speed adaptation now uses fleet-learned roadway curvature
jacobevelyn:
> ...then that object is added to the geocoded whitelist.

combined with
> ...we now believe [radar] can be used as a primary control sensor without requiring the camera to confirm visual image recognition.
Seems like they're now intentionally ignoring the possibility of wooden and painted plastic obstacles?