That's a fair point, and it occurred to me while reading the article as well. I think it points to a deeper issue with using ML in these sorts of safety contexts: if the only way to really test your safety system is to actually put people in danger, the whole concept may be problematic.
m0zg|6 years ago