top | item 29711565

vikasnair | 4 years ago

This is exactly what we’re trying to solve at Unbox (S21).

Tons of potholes like this exist in the AI we use every day. We used to be ML engineers at Siri and had to invest millions in monitoring tools to stay on top of them. That's fine and all, but what's better is to catch them before you ship and before your users suffer (sometimes literally, as in this case).

We think that better tools for QA-ing models, which allow more people (not just ML engineers) to get eyes on the model, might help catch mistakes proactively rather than retroactively.

pkz|4 years ago

This made me happy. The state of language models, paired with the overoptimistic ideas a lot of people have about AI, sets the scene for a number of train wrecks. I hope more people critically evaluate their models before releasing them at mass scale.

giardini|4 years ago

vikasnair says: > "Tons of potholes like this exist in the AI we use every day..."

Maybe "potholes" is not the correct analogy. Maybe AI is going down the wrong road.