pkage|1 year ago
LIME and other post-hoc explanatory techniques (DeepSHAP, etc.) only give an explanation for a single inference, and aren't helpful for understanding the model as a whole. In other words, you can make a reasonable guess as to why a specific prediction was made, but you have no idea how the model will behave in the general case, even on similar inputs.
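A minimal sketch of the locality issue being described, using a toy LIME-style surrogate (not the actual `lime` library): perturb inputs around one point, fit a proximity-weighted linear model, and read off its coefficients as the "explanation." The black-box function, kernel width, and sample counts below are all illustrative assumptions. The point is that the same model yields opposite-signed explanations at two nearby query points, so the local explanation tells you little about global behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: nonlinear in feature 0, so no single
# linear "explanation" can describe it everywhere.
def black_box(X):
    return X[:, 0] ** 2 + 3 * X[:, 1]

def local_linear_explanation(f, x0, n_samples=500, width=0.5):
    """LIME-style sketch: sample perturbations around x0, weight them
    by proximity to x0, fit a weighted linear surrogate, and return
    the surrogate's per-feature coefficients."""
    X = x0 + rng.normal(scale=width, size=(n_samples, x0.size))
    y = f(X)
    # Gaussian proximity kernel centered on the query point.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * width ** 2))
    Xb = np.hstack([np.ones((n_samples, 1)), X])  # add intercept column
    # Weighted least squares via sqrt-weight rescaling.
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xb * sw, y * sw.ravel(), rcond=None)
    return coef[1:]  # skip intercept: per-feature local weights

# Same model, two nearby query points: feature 0's attributed weight
# flips sign (slope of x^2 is positive at x=2, negative at x=-2).
print(local_linear_explanation(black_box, np.array([2.0, 0.0])))
print(local_linear_explanation(black_box, np.array([-2.0, 0.0])))
```

Each printout is faithful near its own query point, but extrapolating either one to "similar inputs" on the other side already gets the sign of the effect wrong, which is the limitation the comment describes.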
Narhem|1 year ago
It’s a disconnect between chasing a real-life “AI” and trying to find something that works and that you can place some form of trust in.
solidninja|1 year ago