There's a similar situation with Go, where some positions utterly confuse bots trained mostly on normal games. There was an interesting research blog post on training a bot specifically to become good at solving one of these weird problems (Igo Hatsuyoron 120, the "hardest go problem ever"): https://blog.janestreet.com/deep-learning-the-hardest-go-pro...
kadoban | 4 years ago
So KataGo being unable to handle it without special training doesn't seem _quite_ as blind a blind spot as the chess examples from the article (I'm bad at chess, and even I could understand one or two of those).
I'm not trying to undermine your mentioning this, in case it comes off that way; on the contrary, I think the comparison is quite interesting. I'm curious whether this reflects a difference between Go and chess themselves, a difference in how well specific kinds of AIs handle such positions, or just a difference in how easily humans can craft and understand hard problems in each game.