mvanveen | 1 year ago

In this case let's assume that the weights and biases of, say, a neural network are fixed and the model is already trained.

One way of thinking about explainability is that it deals with determining, for some input, how much each feature contributed to the final outcome (e.g. variables 1 and 2 contributed x% and y% to the final inferred value).
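A minimal sketch of that idea, assuming a hypothetical two-input network with one tanh hidden layer and made-up fixed weights: gradient-times-input attribution backprops the output's gradient to the inputs and uses it to estimate each feature's share of the prediction.

```python
import numpy as np

# Hypothetical "already trained" network: weights and biases are fixed.
W1 = np.array([[0.5, -0.3], [0.2, 0.8]])
b1 = np.array([0.1, -0.1])
W2 = np.array([0.7, -0.4])
b2 = 0.05

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def input_gradient(x):
    # Manual backprop of d(output)/d(input) through the fixed network.
    z = W1 @ x + b1
    h = np.tanh(z)
    dh = W2 * (1 - h ** 2)  # gradient through the tanh nonlinearity
    return W1.T @ dh        # chain rule back to the input features

x = np.array([1.0, 2.0])
attribution = input_gradient(x) * x            # gradient-times-input
share = np.abs(attribution) / np.abs(attribution).sum()
print(share)  # rough per-feature contribution fractions for this input
```

Note this is only one of many attribution schemes (integrated gradients, SHAP, etc. refine it), and the shares are local to the particular input, not a global property of the model.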

You're correct to suggest that when you backpropagate the residual error there are also non-linear interactions between features, and those will affect how much each variable contributes to the updated weight values (in fact, that's kind of the point of a deep network ;).
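A toy illustration of that interaction effect, using a made-up function rather than a real network: for a multiplicative interaction like f(x1, x2) = x1 * x2, each feature's gradient (and hence its attribution) depends on the value of the *other* feature, so no fixed per-feature percentage exists.

```python
def f(x1, x2):
    # Simplest possible feature interaction: a product term.
    return x1 * x2

def grads(x1, x2):
    # d f / d x1 = x2 and d f / d x2 = x1:
    # each feature's sensitivity is set by the other feature.
    return (x2, x1)

# Holding x1 fixed but changing x2 changes x1's attribution,
# which is exactly why explanations are input-dependent.
print(grads(1.0, 3.0))  # x1's gradient is 3.0 here
print(grads(1.0, 5.0))  # ...but 5.0 here
```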
