top | item 23388955

Plyphon_ | 5 years ago

Do you have a link to that tank/weather study? I've never heard of it, sounds really interesting!

YeGoblynQueenne | 5 years ago

It may or may not have happened with tanks; it sure happened with horses:

To understand how their AI reached decisions, Müller and his team developed an inspection program known as Layerwise Relevance Propagation, or LRP. It can take an AI’s decision and work backwards through the program’s neural network to reveal how a decision was made.

In a simple test, Müller’s team used LRP to work out how two top-performing AIs recognised horses in a vast library of images used by computer vision scientists. While one AI focused rightly on the animal’s features, the other based its decision wholly on a bunch of pixels at the bottom left corner of each horse image. The pixels turned out to contain a copyright tag for the horse pictures. The AI worked perfectly for entirely spurious reasons. “This is why opening the black box is important,” says Müller. “We have to make sure we get the right answers for the right reasons.”

https://www.theguardian.com/science/2017/nov/05/computer-say...
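
To make the backward pass concrete, here is a minimal numpy sketch of LRP's epsilon rule on a toy two-layer ReLU network. Everything here (the layer sizes, the random weights standing in for a trained model, the `lrp_epsilon` name) is invented for illustration; real LRP implementations handle many layer types and rule variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network; random weights stand in for a trained model.
W1 = rng.normal(size=(4, 6))   # input (4 "pixels") -> hidden (6 units)
W2 = rng.normal(size=(6, 1))   # hidden -> single output score

def lrp_epsilon(x, eps=1e-6):
    """Propagate the output score back to the inputs (LRP epsilon rule)."""
    # Forward pass, keeping the activations we need.
    a1 = np.maximum(0, x @ W1)          # hidden activations
    out = (a1 @ W2).item()              # network output score

    # Backward pass: each unit's relevance is split among its inputs in
    # proportion to their contribution z_ij = a_i * w_ij to that unit.
    z2 = a1[:, None] * W2               # contributions hidden -> output
    r1 = (z2 / (z2.sum(axis=0) + eps)) * out   # relevance of hidden units
    r1 = r1.sum(axis=1)

    z1 = x[:, None] * W1                # contributions input -> hidden
    r0 = (z1 / (z1.sum(axis=0) + eps)) * r1    # relevance of input pixels
    return r0.sum(axis=1), out

relevance, score = lrp_epsilon(np.array([1.0, 0.5, -0.3, 2.0]))
# Conservation: the input relevances sum (approximately) to the output score,
# so a pixel with a large share of that sum drove the decision.
print(relevance, score)
```

The conservation property is the point: if most of the relevance ends up on a copyright tag in the corner, you know the network decided for the wrong reasons.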

There is, in general, a great deal of work on explaining the decisions of neural nets. Explainable AI is a thing, with much funding and research activity, and there are books, papers, etc., e.g. https://link.springer.com/book/10.1007/978-3-030-28954-6.

And all this is because, quite regardless of whether that tank story is real or not, figuring out what a neural network has actually learned is very, very difficult.

One might even say that it is completely, er, irrelevant, whether the tank story really happened or not, because it certainly captures the reality of working with neural networks very precisely.

mkl | 5 years ago

It doesn't seem to have happened. Gwern has done some extensive research: https://www.gwern.net/Tanks

YeGoblynQueenne | 5 years ago

As I say above, even if the tank story is apocryphal, it captures the tendency of neural nets (modern or ancient, doesn't matter) to overfit to irrelevant details (which, btw, is what Layerwise Relevance Propagation from my comment above is trying to determine).
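
That tendency is easy to reproduce in miniature. Below is a toy sketch (all names and numbers invented, a plain logistic regression rather than a deep net) where a "copyright tag" pixel perfectly encodes the label, and the learner latches onto it instead of the weak genuine feature:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic dataset: 8-pixel "images". The genuine signal (pixel 0) is
# noisy, but pixel 7 -- our stand-in for the copyright tag -- perfectly
# encodes the label.
n = 200
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 8))
X[:, 0] += 0.3 * (2 * y - 1)   # weak genuine feature
X[:, 7] = 2.0 * y - 1          # spurious tag, perfectly correlated

# Logistic regression via plain gradient descent.
w = np.zeros(8)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n

# The learned weight on the spurious "tag" pixel dwarfs the weight on
# the genuine feature: perfect accuracy, for entirely spurious reasons.
print(np.round(w, 2))
```

The classifier scores perfectly on this data, which is exactly why the problem is insidious: nothing in the accuracy numbers tells you it learned the tag rather than the horse.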

This is probably the reason why this story has been repeated so many times (and with so many variations): because it rings true to anyone who has ever trained a neural net, or interacted with one for any significant amount of time. Unfortunately, the article you cite chooses to suggest otherwise.

In any case, if the tank story is an urban legend it has its roots firmly in reality.