
rudi-c | 4 years ago

There's a similar situation with Go, where some positions utterly confuse bots trained mostly on normal games. There was an interesting research blog post on training a bot specifically to become good at solving one of these weird problems (Igo Hatsuyoron 120, the "hardest go problem ever").

https://blog.janestreet.com/deep-learning-the-hardest-go-pro...


kadoban | 4 years ago

It's worth noting that humans only understand that problem (as well as they do) after _centuries_ of study by many professional players. My understanding is that even very strong human players must study it for days, months, or years to really understand it well.

So KataGo being unable to handle it without special training doesn't seem like _quite_ as glaring a blind spot as the chess examples from the article (I suck at chess, and even I was able to understand one or two of them).

I'm not trying to undermine your mention of this, in case it comes off that way; on the contrary, I think the comparison is quite interesting. I'm curious whether this is a difference between go and chess, a difference in how well specific kinds of AIs handle such positions, or just a difference in humans' ability to craft and/or understand hard problems in the two games.