vikasnair | 4 years ago
Tons of potholes like this exist in the AI we use every day. We used to be ML engineers at Siri and had to invest millions in monitoring tools to stay on top of them. That's fine and all, but it's better to catch these problems before you ship and before your users suffer (sometimes literally, as in this case).
We think that better tools for QA-ing models, which allow more people (not just ML engineers) to get eyes on the model, might help catch mistakes proactively rather than retroactively.
pkz | 4 years ago
giardini | 4 years ago
Maybe "potholes" is not the correct analogy. Maybe AI is going down the wrong road.