top | item 27964483


sd8f9iu | 4 years ago

Your experience with engines is very different from my own and that of the wider chess community. The general consensus is that playing computers at any strength is both counterproductive and unfulfilling. They do indeed make random blunders if you lower their search depth. The engine will play brilliant moves for any tactic that falls within the search depth, but fail miserably for tactics that fall one ply outside of it. Humans function very differently from computers: they evaluate a fraction of the positions, but use a much more sophisticated evaluation function. There is no way of emulating human weakness with such a vastly different style of computation.
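The "brilliant within depth, blind one ply beyond it" behavior can be sketched with a toy fixed-depth minimax search. This is illustrative only: the tree, the position scores, and the move names ("grab", "quiet") are all made up, not taken from any real engine or game.

```python
# Each position is (static_eval, children); eval is from White's
# (the maximizer's) perspective. Children are positions after one move.

# "grab": White wins a pawn (+2), but Black has a refutation that only
# becomes visible at ply 4, after which White is down a piece (-9).
GRAB = (2, [                 # ply 1: White grabs the pawn
    (2, [                    # ply 2: Black's quiet-looking reply
        (2, [                # ply 3: White continues normally
            (-9, []),        # ply 4: the refutation lands
        ]),
    ]),
])

# "quiet": a safe move that keeps the balance at every depth.
QUIET = (0, [(0, [(0, [(0, [])])])])

def minimax(position, depth, maximizing):
    static, children = position
    if depth == 0 or not children:
        return static  # horizon reached (or game over): static evaluation
    scores = (minimax(c, depth - 1, not maximizing) for c in children)
    return max(scores) if maximizing else min(scores)

def best_move(moves, depth):
    # White to move: pick the move whose resulting position scores best.
    return max(moves, key=lambda m: minimax(moves[m], depth - 1, False))

moves = {"grab": GRAB, "quiet": QUIET}
print(best_move(moves, depth=3))  # "grab" -- the refutation is past the horizon
print(best_move(moves, depth=4))  # "quiet" -- one more ply reveals the tactic
```

At depth 3 the search confidently plays into the losing line (a "blunder no human would make" if the refutation is an obvious one), and at depth 4 it avoids it perfectly; there is no depth at which it errs gradually the way a person does.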

Here is a game I just played that illustrates this phenomenon:

https://lichess.org/PiIFqI2c/black

The engine plays well before making a series of random blunders no human would make. You can try this yourself by playing the different Stockfish levels on lichess.org. You will be unable to find a level that makes for an enjoyable game. There are some new engines that use neural networks to try to play similarly to how humans do [1]. I can't comment on their success, but their lack of wide adoption by the chess community signals to me that their play is still not comparable to a human's.

[1] https://maiachess.com/


thom | 4 years ago

I was answering the “they can’t be artificially weakened” point. I’m not saying they play like humans.

sd8f9iu | 4 years ago

Ok, the implication in my comment was that they can't be weakened like humans, not that they cannot be weakened at all. The phenomenon of random mistakes does not need to be programmed into the engine; it is a byproduct of lowering the search depth or weakening the evaluation function.
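The eval-function side of this can also be sketched. The snippet below is only loosely analogous to how real engines implement lower levels (Stockfish's skill levels do involve randomized move choice, but not this exact scheme); the scores and noise range are invented for illustration. Jittering the evaluation makes the engine sometimes prefer an outright losing move over a sound one, with no pattern a human opponent would produce:

```python
import random

def noisy_eval(static, noise):
    # Crude weakening: jitter the static evaluation by a uniform amount.
    # Nothing "blunder-like" is programmed in; blunders simply fall out.
    return static + random.uniform(-noise, noise)

random.seed(0)  # fixed seed so the experiment is reproducible

# Two candidate moves: a sound one (+1) and a clearly losing one (-5).
sound, losing = 1, -5

# Count how often the noisy engine picks the losing move.
trials = 10_000
blunders = sum(
    noisy_eval(losing, noise=8) > noisy_eval(sound, noise=8)
    for _ in range(trials)
)
print(f"blunder rate: {blunders / trials:.0%}")
```

With this much noise the engine throws games on roughly a fifth of such decisions, yet between blunders it still calculates perfectly, which is exactly the uncanny profile the parent comment describes.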