top | item 21202183

drabiega | 6 years ago

That's a fair point, which also occurred to me while reading the article. It is, I think, indicative of a deeper issue with using ML in these sorts of safety contexts: if the only way to really test your safety system is to actually put people in danger, your whole concept may be problematic.

m0zg | 6 years ago

That's why Tesla's approach is pretty brilliant, IMO. It's easy to collect samples where there was hard braking and a real human was visible in the path of the car while the car was under human control. No dummies are needed, and since the AI was not in control of the car, there's no ethics issue either. Your Tesla will upload such samples automatically if Tesla's deep learning system wants them.
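The trigger logic described above can be sketched roughly like this. This is purely an illustrative guess at the idea, not Tesla's actual system: the threshold, field names, and detector output are all made up for the example.

```python
from dataclasses import dataclass

# Assumed deceleration threshold for "hard braking", in g (hypothetical value).
HARD_BRAKE_G = 0.5

@dataclass
class Frame:
    decel_g: float            # longitudinal deceleration, in g
    pedestrian_in_path: bool  # onboard detector saw a person ahead
    human_driving: bool       # autopilot/FSD was disengaged

def should_upload(frame: Frame) -> bool:
    """Flag a clip for upload only when all trigger conditions hold:
    a human was driving, braked hard, and a pedestrian was in the path."""
    return (frame.human_driving
            and frame.decel_g >= HARD_BRAKE_G
            and frame.pedestrian_in_path)

# Hard brake for a pedestrian while a human drove: collect the sample.
print(should_upload(Frame(decel_g=0.7, pedestrian_in_path=True, human_driving=True)))
# Same event under AI control: skip it, avoiding the ethics problem above.
print(should_upload(Frame(decel_g=0.7, pedestrian_in_path=True, human_driving=False)))
```

The point of the design is that the labels come for free from the human driver's behavior, so no one is ever deliberately put at risk to gather training data.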