indy7500 | 7 years ago
The model actually gives us a confidence for each possibility. It might say, N-90%, K-10%, B/R/Q-0% for one of the boxes. Here, we look at how confident our character recognizer is on each character. If the PGN is invalid, then we know some character was recognized incorrectly. We look at the low-confidence characters and change them to the next highest confidence prediction, checking to see which combination of changes delivers a valid PGN. For example, consider the string of moves 1. d4 d2. Black can't play d2 on move 1! But we look at the next most likely predictions, perhaps d8 for white and d5 for black. These are the combinations:
d4 d2
d4 d5
d8 d2
d8 d5
Only the second one is valid, so we choose it and continue ahead.
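The enumerate-and-validate idea above can be sketched roughly like this (a minimal illustration, not the author's actual code; the probabilities are made up, and `is_valid` is a stand-in for a real PGN validator such as one built on a chess library):

```python
import itertools
import math

def best_valid_line(candidates, is_valid):
    """candidates: one list of (move, probability) pairs per half-move.
    Tries combinations in order of decreasing joint probability and
    returns the first one the validator accepts, else None."""
    combos = itertools.product(*candidates)
    # Rank combinations by the product of their per-move confidences.
    scored = sorted(combos, key=lambda moves: -math.prod(p for _, p in moves))
    for moves in scored:
        line = [m for m, _ in moves]
        if is_valid(line):
            return line
    return None

# The example from the comment: top-2 guesses for "1. d4 d2".
candidates = [
    [("d4", 0.9), ("d8", 0.1)],  # white's move 1
    [("d2", 0.6), ("d5", 0.4)],  # black's move 1
]

# Stand-in validator: a real one would replay the moves with a
# chess engine/library; here legality is hard-coded for the demo.
def is_valid(line):
    return line == ["d4", "d5"]

print(best_valid_line(candidates, is_valid))  # ['d4', 'd5']
```

The highest-confidence combination, `d4 d2` (joint probability 0.54), fails validation, so the search falls through to `d4 d5` (0.36), which is accepted.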
If you literally wrote "d4 d2," then it's much less likely that the correct digit (as opposed to the 2 you actually wrote) will appear in the model's top 3 predictions.
theli0nheart | 7 years ago
Anyway, this is really cool and a smart idea. I'll be printing these out and bringing them to my next tourney. Excited to see how it goes! How do I send you direct feedback?
indy7500 | 7 years ago
P.S. Did you have something in mind for improving the postprocessing / trying out different iterations? Or was it just a challenge :D