ekjhgkejhgk | 8 hours ago
I've always thought that the idea that decision trees are "explainable" is very overstated. The moment you go more than a couple of levels deep, it becomes an uninterpretable jungle. I've actually done the exercise of inspecting how a depth-15 decision tree makes its decisions, and I found it impossible to interpret anything.
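If anyone wants to try the exercise themselves, here's a minimal sketch (assuming scikit-learn; the dataset and depth are arbitrary stand-ins) that fits a deep tree and dumps its rule list -- the number of leaf paths can grow exponentially with depth, which is why the printout turns into a jungle:

    # Sketch: fit a deep decision tree and print its rules.
    # Assumes scikit-learn; dataset choice is arbitrary.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    tree = DecisionTreeClassifier(max_depth=15, random_state=0).fit(data.data, data.target)

    rules = export_text(tree, feature_names=data.feature_names.tolist())
    print(tree.get_n_leaves(), "leaf paths")  # each one is a separate "explanation"
    print(rules[:2000])  # even a slice of the rule dump is hard to follow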
In a neural network you can likewise follow the successive matrix multiplications, ReLUs, etc. through the layers, but you still end up not knowing how the decision is made.
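To make that concrete, here's a hand-rolled forward pass (a sketch with numpy; the shapes and random weights are stand-ins, not a real trained model). Every intermediate value is fully inspectable, yet none of them tells you *why* the output is what it is:

    # Sketch: a two-layer forward pass with random stand-in weights.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                          # one input vector
    W1 = rng.normal(size=(8, 4))                    # layer 1 weights
    W2 = rng.normal(size=(1, 8))                    # layer 2 weights

    h = np.maximum(W1 @ x, 0.0)                     # matmul + ReLU
    y = W2 @ h                                      # the "decision"
    print(h)  # every activation is visible...
    print(y)  # ...but inspecting them doesn't explain the decision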
Thoughts?
lokimedes | 7 hours ago
My second job after physics was AI for defense, and boy is the dream of explainable AI alive there.
Honestly, anyone who “needs” AI to be understandable by dissection suffers from control issues :)