RSchaeffer | 6 years ago
But how did their model compare against others? The article only mentions how their interpretable model compared against their own ML attempts.
jph00 | 6 years ago
Their model didn't win. IBM's model won, based on actual metrics around useful insights.
Mathnerd314 | 6 years ago
The IBM team got $5,000 and the second place/honorable mention NYU got $2,000. So going by prize amounts, the Duke model was still pretty good.
IBM turned the model/paper into a toolkit: https://www.ibm.com/blogs/research/2019/08/ai-explainability... Their model seems to be a variant of decision trees that has a knob controlling how complicated the trees are.
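The "knob" idea can be sketched as a complexity penalty applied while growing the tree: a split is kept only if it reduces impurity by more than the penalty, so turning the knob up yields smaller, more interpretable trees. This is a toy illustration under that assumption, not IBM's actual algorithm; all function names and the dataset here are hypothetical.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Return (weighted_impurity, feature, threshold) for the best split, or None."""
    best = None
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue  # split puts everything on one side; useless
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if best is None or w < best[0]:
                best = (w, f, t)
    return best

def grow(rows, labels, penalty):
    """Grow a tree as nested dicts; `penalty` is the complexity knob.

    A split is accepted only when its impurity gain exceeds `penalty`,
    so larger penalties produce shallower trees.
    """
    majority = Counter(labels).most_common(1)[0][0]
    split = best_split(rows, labels)
    if split is None or gini(labels) - split[0] <= penalty:
        return {"leaf": majority}
    _, f, t = split
    li = [i for i, r in enumerate(rows) if r[f] <= t]
    ri = [i for i, r in enumerate(rows) if r[f] > t]
    return {
        "feature": f, "threshold": t,
        "left": grow([rows[i] for i in li], [labels[i] for i in li], penalty),
        "right": grow([rows[i] for i in ri], [labels[i] for i in ri], penalty),
    }

def leaves(tree):
    """Count leaves: a proxy for how complicated the tree is."""
    return 1 if "leaf" in tree else leaves(tree["left"]) + leaves(tree["right"])

# Tiny made-up dataset: class mostly follows feature 0, with one
# exception that only an extra split on feature 1's row can isolate.
rows = [(1, 1), (2, 1), (3, 1), (6, 1), (7, 1), (8, 9)]
labels = [0, 0, 0, 1, 1, 0]
print(leaves(grow(rows, labels, 0.0)))  # 3 leaves: knob off, full tree
print(leaves(grow(rows, labels, 0.3)))  # 1 leaf: high penalty prunes everything
```

The same trade-off shows up in mainstream libraries, e.g. scikit-learn's `ccp_alpha` cost-complexity pruning parameter, which likewise trades training accuracy for a smaller tree.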
And the evaluation was completely subjective, so there's not any meaning to the Duke people losing besides that the judges didn't like them.