
mvanveen | 1 year ago

I am a co-author of a patent for model explainability for credit risk underwriting applications using Shapley values.

In fairness, I haven't given this article a thorough read, but my initial impression is that I'm frustrated by the FUD it's attempting to spread. As my boss would often remind us all: model explainability is an under-constrained optimization problem. By definition there isn't a unique explanation decomposition unless you further constrain the problem.
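To make that concrete, here's a rough sketch (synthetic data and model, nothing from a real underwriting system) of one way the non-uniqueness shows up in practice: Shapley attributions depend on the background distribution you condition on, and two perfectly defensible background choices decompose the same prediction differently:

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = X[:, 0] + 2 * X[:, 1] * X[:, 2]          # interaction makes the baseline matter
    model = GradientBoostingRegressor().fit(X, y)

    x = X[:1]                                     # single instance to explain
    for name, background in [("zero baseline", np.zeros((1, 3))),
                             ("data mean", X.mean(axis=0, keepdims=True))]:
        phi = shap.KernelExplainer(model.predict, background).shap_values(x)
        print(name, np.round(phi, 3))
    # Both runs satisfy the Shapley axioms; the per-feature numbers still
    # differ, because the background is an extra constraint you have to choose.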

So I personally find that the hand-wringing about different explanation methods not agreeing 100% on a given model inference, while definitely thought-provoking and worth considering, should at least account for this reality. For some reason, a lot of folks in the ML community seem to have come to the opinion that because the problem is under-constrained, explanations shouldn't be calculated at all or have no utility.

All else being equal, would you rather be able to examine which features are driving a model to deny a disproportionate number of folks of a particular race or ethnicity, or not? My point is that even if there are limitations to explainability, there are a lot of very real, critical scenarios where applying SHAP has actual, real-world utility.
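For example, here's roughly what that kind of audit can look like. Everything here is illustrative: synthetic data, hypothetical feature names, and the protected attribute is deliberately held out of the model itself and only used to slice the attributions:

    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)
    n = 1000
    X = pd.DataFrame({
        "income": rng.normal(50_000, 15_000, n),
        "debt_ratio": rng.uniform(0, 1, n),
        "credit_history_yrs": rng.integers(0, 30, n),
    })
    group = rng.integers(0, 2, n)                  # protected attribute, not a model input
    denied = (X["debt_ratio"] + rng.normal(0, 0.1, n) > 0.6).astype(int)

    model = GradientBoostingClassifier().fit(X, denied)
    phi = shap.TreeExplainer(model).shap_values(X)  # (n, features), log-odds units

    # Mean |SHAP| per feature, split by group: a large gap flags a feature
    # whose influence on denials differs across groups and deserves scrutiny.
    print(pd.DataFrame(np.abs(phi), columns=X.columns).groupby(group).mean())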

Furthermore, it's not clear that LIME or other explainability methods provide better or more robust explanations than Shapley values. As someone who has looked at this pretty extensively in credit underwriting, I'd personally feel most comfortable computing SHAP values while acknowledging some of the limitations and risks this article calls out.
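If you want to see the disagreement for yourself, here's a minimal side-by-side sketch (again synthetic data and hypothetical feature names). In my experience the two methods tend to agree on which features dominate while the magnitudes drift, and neither is obviously "more correct":

    import numpy as np
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(2)
    names = ["income", "debt_ratio", "utilization"]
    X = rng.normal(size=(800, 3))
    y = (X[:, 1] - X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
    model = GradientBoostingClassifier().fit(X, y)

    x = X[0]
    lime_exp = LimeTabularExplainer(X, feature_names=names, mode="classification")
    print("LIME:", lime_exp.explain_instance(x, model.predict_proba,
                                             num_features=3).as_list())

    phi = shap.TreeExplainer(model).shap_values(x.reshape(1, -1))[0]
    print("SHAP:", dict(zip(names, np.round(phi, 3))))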

Axioms such as completeness are also pretty reasonable, and I think there is a fair amount of real-world utility in explainability algorithms that derive from such an axiomatic basis.
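Completeness (SHAP calls it local accuracy) just says the per-feature attributions sum to the model's output minus its expected output, and you can sanity-check it numerically. A minimal sketch with made-up data:

    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(3)
    X = rng.normal(size=(400, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=400)
    model = GradientBoostingRegressor().fit(X, y)

    explainer = shap.TreeExplainer(model)
    phi = explainer.shap_values(X)                 # (n_samples, n_features)

    # Completeness / local accuracy: sum_j phi[i, j] == f(x_i) - E[f(X)]
    lhs = phi.sum(axis=1)
    rhs = model.predict(X) - explainer.expected_value
    print(np.allclose(lhs, rhs, atol=1e-6))        # should print True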
