(no title)
mvanveen | 1 year ago
One way of thinking about explainability is that it tries to determine, for a given input, how much each feature contributes to the final outcome (e.g. variables 1 and 2 contributed x% and y% to the inferred value).
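A minimal sketch of that idea, using a hypothetical linear model where the per-feature contributions are just weight times input (the weights and inputs below are made up for illustration; non-linear models need attribution methods like gradients or SHAP):

```python
import numpy as np

# Hypothetical linear model: prediction = w · x.
weights = np.array([0.5, -1.5, 2.0])
x = np.array([2.0, 1.0, 0.5])

contributions = weights * x                 # each feature's additive share
prediction = contributions.sum()            # the model's output
# Percent contribution of each feature (by magnitude).
percent = 100 * np.abs(contributions) / np.abs(contributions).sum()

print(prediction)
print(percent)
```

For a linear model the contributions sum exactly to the prediction, which is what makes this the easy case.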
You're correct to suggest that when you backpropagate the residual error there are also non-linear interactions between features, and those interactions affect how much each variable contributes to the updated weight values (in fact, that's kind of the point of a deep network ;).
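One common way to handle that non-linear case is gradient-based attribution: score each feature by the model's gradient at the input times the input itself. A sketch for a hypothetical two-feature model f(x) = sigmoid(w · x), with the gradient computed by hand via the chain rule:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical non-linear model: f(x) = sigmoid(w · x).
w = np.array([1.0, -2.0])
x = np.array([0.5, 0.25])

z = w @ x
s = sigmoid(z)
grad = s * (1.0 - s) * w        # df/dx via the chain rule
attribution = grad * x          # "gradient × input" attribution

print(attribution)
```

Unlike the linear case, these attributions depend on where you evaluate the gradient, which is exactly the non-linear interaction effect at work.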