
After AI beat them, professional Go players got better and more creative

428 points| iNic | 1 year ago |henrikkarlsson.xyz | reply

227 comments

[+] dtnewman|1 year ago|reply
Just look to Chess. The top players today are way better than any of the greats before, because they can train against computers and know exactly where they failed. That said, because they've gotten so good, chess at the top levels is pretty boring... it's hard to come up with a unique strategy so players tend to be defensive. Lots of ties.

On the other hand, chess is more popular than ever. It's huge in high schools. I see people playing it everywhere. I know that for me, I love being able to play a game and then view the computer analysis afterwards and see exactly what I did wrong (granted, sometimes a move can be good for a computer who will know how to follow through on the next 10 moves, but not necessarily good for me... but most of the time I can see where I made a mistake when the computer points it out).

Side note: I play on LIChess and it's great. Is there an equivalent app for Go?

[+] hibikir|1 year ago|reply
The defensiveness has absolutely nothing to do with better computers and the improvements in play that came with it, but with tournaments where risk taking is an economic disaster. As others have said, there aren't massive numbers of ties in the candidates tournament, because the difference in value between being first and second is so massive that if you aren't first, you are last.

Compare this to regular high-level chess in the Grand Chess Tour: it's where most of your money is going to come from if you are a top player. Invitation to the tour as a regular is by rating, and there's enough money even at the bottom of the tour that the difference between qualifying or not is massive. Therefore, the most important thing is to stay on the tour. Lose 20 points of rating, and barring Rex Sinquefield deciding to sponsor your life out of the goodness of his heart, you might as well spend time coaching, because there are so few tournaments with a lot of money.

This also shows in the big difficulties for youngsters who reach 2650 or so: they are only going to find good enough opponents to move up quickly in a handful of events a year where higher-rated players end up risking their rating against them. See how something like the US championship is a big risk for the top US professionals, because all the young players who show up are at least 50 points underrated, if not more.

This is what causes draws, not computer prep. Anand was better at just drawing every game in every tournament back when he was still on the tour, and yet computers were far worse than today, especially with opening theory.

[+] fryz|1 year ago|reply
FWIW, I find the classical chess tournaments with the super GMs to be fairly interesting, if only because the focus of the games is more about the metagame than about the game itself.

The article linked at the bottom of the source is a WSJ piece about how Magnus beats the best players because of the "human element".

A lot of the games today are about opening preparation, where the goal is to out-prepare and surprise your opponent by studying opening lines and esoteric responses (an area where computer play has drastically opened up new territory). Similarly, during the middlegame and endgame, the best players will try to force uncomfortable decisions on their opponents, knowing which positions their opponents tend not to prefer. For example, in round 1 of the Candidates, Fabiano took Hikaru into a position that had very little in the way of aggressive counterplay, effectively taking away a big advantage that Hikaru would otherwise have had.

Watching these games feels somewhat akin to watching generals develop strategies to outmaneuver their counterparts on the other side, taking into consideration their strengths and weaknesses as much as the tactics, deployment of troops, etc.

[+] Taek|1 year ago|reply
I think you would see fewer ties if players got 0.2 points each for draws instead of 0.5 points each for draws.

It lowers the cost of a risky strategy (you only drop 0.2 points instead of 0.5 relative to taking an easy draw) and it makes the reward much greater: a single win and four losses scores the same as five draws.

You won't see players making intentional draws anymore, either.
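The arithmetic behind this is easy to check. A minimal sketch, with scores kept in tenths of a point to avoid float rounding (the function name is mine, not from any rulebook):

```python
def total_score(results, draw_tenths):
    # Scores in tenths of a point: win = 10, loss = 0, draw = draw_tenths.
    points = {"W": 10, "D": draw_tenths, "L": 0}
    return sum(points[r] for r in results)

solid = ["D"] * 5                   # five quiet draws
risky = ["W", "L", "L", "L", "L"]   # one win, four losses

# Classical scoring (draw = 0.5, i.e. 5 tenths): drawing everything wins out.
assert total_score(solid, 5) == 25
assert total_score(risky, 5) == 10

# With draws worth 0.2 (2 tenths), the risky line ties five quiet draws exactly.
assert total_score(solid, 2) == total_score(risky, 2) == 10
```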

[+] zer0-c00l|1 year ago|reply
https://online-go.com/ is the easiest place to get started as a western beginner. The far more active go servers are Asian and have a higher barrier to entry in terms of registration, downloading the client, and dealing with poor localization. (Fox Weiqi, Tygem, etc.)
[+] thatswrong0|1 year ago|reply
I wish Chess960 was more popular for this exact reason. It’s super fun to watch and play compared to normal Chess… basically all I do with my friends
[+] Buttons840|1 year ago|reply
One nice thing about Go is that there are no ties. This is offset, though, by how boring the endgames are and by having to count. Chess has explosive and exciting endings; Go just kind of fizzles out at some point.
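The "counting" step mentioned here can be sketched in code. A toy illustration assuming area (Chinese-style) scoring on a cleanly finished position: each player scores their stones plus any empty region bordered only by their color. This is not a full rules implementation (no ko, seki, or dead-stone handling):

```python
def area_score(board):
    """Area-count a finished board: 'B'/'W' stones plus single-color empty regions."""
    rows, cols = len(board), len(board[0])
    seen = set()
    score = {"B": 0, "W": 0}
    for r in range(rows):
        for c in range(cols):
            cell = board[r][c]
            if cell in score:
                score[cell] += 1          # a stone counts for its owner
            elif (r, c) not in seen:
                # Flood-fill the empty region, noting which colors border it.
                region, borders, stack = [], set(), [(r, c)]
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols:
                            if board[ny][nx] == ".":
                                if (ny, nx) not in seen:
                                    seen.add((ny, nx))
                                    stack.append((ny, nx))
                            else:
                                borders.add(board[ny][nx])
                if len(borders) == 1:     # territory of exactly one color
                    score[borders.pop()] += len(region)
    return score

# A 5x5 board split down the middle: Black walls off the left, White the right.
board = [row.split() for row in [
    ". B W . .",
    ". B W . .",
    ". B W . .",
    ". B W . .",
    ". B W . .",
]]
assert area_score(board) == {"B": 10, "W": 15}
```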
[+] timetraveller26|1 year ago|reply
Don't know about go, but Lishogi is Lichess for shogi (Japanese chess)
[+] yeellow|1 year ago|reply
I recommend goQuest (a mobile app) and playing 9x9 go. I used to play on KGS, but it is less crowded now. The problem is that there are too many servers (OGS, IGS, Tygem, WBaduk, etc.) and no one dominates, so you wait for a game, you need a rating, and so on; most are not very modern or mobile-friendly. Also, 19x19 takes too much time for me compared to chess. 9x9 is perfect, and goQuest has many active players: after a few seconds you get a match (they offer 13x13 and 19x19, but those are less active, I suppose).
[+] jsheard|1 year ago|reply
> Just look to Chess. The top players today are way better than any of the greats before, because they can train against computers and know exactly where they failed.

AlphaGo isn't available for anyone to train against like Stockfish is though, what are Go players using? Has another powerful Go engine been developed since then?

[+] Angostura|1 year ago|reply
> The Queens Gambit turned quite a few of my daughter's friends on to chess
[+] veunes|1 year ago|reply
The ability to train against powerful computer programs has indeed elevated the level of play in many games
[+] tptacek|1 year ago|reply
online-go.com
[+] Pet_Ant|1 year ago|reply
> That said, because they've gotten so good, chess at the top levels is pretty boring

Yeah, I feel the same way about Magic formats when the pros play. When a format is new and people are still discovering it, they have to rely on their gut and make educated guesses. That's when it's fun to play and watch.

[+] bobogei81123|1 year ago|reply
Back when I was a kid learning go, I was taught that the kick joseki (a standard sequence of moves, similar to a chess opening) [1] is a bad move, and you were considered to be trolling (and the teacher would not be pleased) if you played a 3-3 invasion [2] during the opening phase. These have all been vindicated thanks to AI and are played pretty commonly nowadays. AI has definitely helped eliminate many dogmas and myths in go.

[1] https://senseis.xmp.net/?44PointLowApproach#toc6

[2] https://senseis.xmp.net/?33PointInvasion#toc2

[+] akira2501|1 year ago|reply
The other possibility is that it destroyed the incidental dogma that tends to build up in these types of games and human activities. This is why I like the "hacker ethos" as much as I do, it tends to eschew things like "accepted" dogma in order to find additional performance that other people were just leaving on the table out of polite comfort.
[+] lordnacho|1 year ago|reply
This is the tip of the iceberg, right? It's foreshadowing AI helping experts become better. I can see it happening in a lot of creative fields, including software. Perhaps this is where it really separates the experts from the juniors, because only experts will be able to judge whether the AI has helped them create something actually good.
[+] tptacek|1 year ago|reply
When you read Go strategy resources, you see a lot of things divided into what best practices were before AlphaGo and what they are now. It's a whole big thing.

It is still the case, though, that AI dominates humans at Go; humans didn't get so creative about the game that they put AI back on its heels (though some did discover exploitable AI "strategy bugs").

[+] nicklecompte|1 year ago|reply
The "strategy bugs" are a symptom of a more general shortcoming and why 2024 AI is still basically dumber than a mouse.

Keep in mind that if you had a variation of Go with a "hole" in the middle of the board, both Lee Sedol and a competent amateur would be able to play reasonable "Doughnut Go" without any prior experience. But AlphaGo and its successors would certainly make a ton of dumb unforced errors unless they practiced at least a few hundred games. (I am basing this observation on similar experiments with a Breakout AI; I'm not sure if these experiments have been done with Go.)

Mammals, including humans, have advanced brains because we evolved to solve weird and unexpected problems with moderate reliability, not to optimize well-known benchmarks with high reliability. (This is also why plants are green instead of black.) By contrast, AlphaGo is a machine designed to solve a highly specific problem. The whole point of machines is that they dominate humans at specific tasks, otherwise we would just use a human. But we don't describe bulldozers as "superhuman" unless we're being intentionally obscure; the same should apply to AI. Otherwise we risk assuming the AI is capable of things it probably can't do without retraining.

[+] pa7ch|1 year ago|reply
Agreed, but I still think humans should get a little more credit for winning against AI, no matter how. It's a competitive game with very simple and clear rules. A hole in AI strategy is a hole, even if quickly patched!

I am still so impressed that Lee Sedol beat AlphaGo in 1 game out of 5, way back when AI made its breakthrough. I was sad he felt so sheepish afterward for losing. In hindsight, I think it was an amazing accomplishment, even if today an AI could beat Shin Jin-seo (the #1 player) 100 games out of 100!

[+] paulcole|1 year ago|reply
This is true in Scrabble as well.

When I was playing seriously, there were strong players who played a ton over the board and had deep intuition about what made plays good and what made plays bad. In the late 1990s/early 2000s there started to be a lot more in the way of computer simulation and analysis, and some very strong computer players.

One (general) example was that older players liked the idea of making longer plays using more tiles to "win" a race to the S and blank tiles (the best tiles in the bag). Computer simulations generally show that turnover (as this is called) isn't optimal and you're better off holding strong combinations of letters rather than playing them off hoping to draw something better.

Now younger players are better than ever because all of their training came with the help of computer analysis and simulation.

Of course in Scrabble a huge part of it comes down to just memorizing the words in the dictionary.

[+] ummonk|1 year ago|reply
The article is misleading regarding the history of chess. Magnus excepted, most top players did adopt a more cold and calculating material-focused chess style that mimicked Deep Blue and subsequent chess computers. It was only with the success of AlphaGo and LC0 that top chess players have started playing a more creative playstyle again, playing various wing pawn advances, as well as being more willing to give up material for nebulous initiative or positional advantages.
[+] kccqzy|1 year ago|reply
> Shin et al calculate about 40 percent of the improvement came from moves that could have been memorized by studying the AI. But moves that deviated from what the AI would do also improved, and these “human moves” accounted for 60 percent of the improvement.

I don't often play Go myself but a number of my friends do. Among non-professional players, it is really common to see game play being not as exciting as before because there's now an easy way: just memorize and copy what the AI does. I don't doubt that professional players still have a ton of creativity, but a lot of non-pros don't really have too much creativity and the whole game becomes memorizing and replicating AI moves.

[+] csa|1 year ago|reply
> Among non-professional players, it is really common to see game play being not as exciting as before because there's now an easy way: just memorize and copy what the AI does

This is just… not true.

Unless one is playing at high dan ranks, it's trivially easy to induce a "memorized sequence" that your opponent either will not have memorized or that will leave them in a situation they don't understand well enough to capitalize on.

The “slack moves” in the openings that pros talk about are often worth 1.5 points or less (often a fraction of a point), and that assumes pro-level follow up.

This pro-level follow up is laughably rare outside of strong amateur dan levels and pro levels (and even within those ranks there are substantial differences).

[+] anononaut|1 year ago|reply
Before that, weak amateurs were just replicating human joseki. That's nothing new. They definitely give a player a good start, but knowing which to use and when, and of course how to follow up until the game is over is no simple task. It also happens to be the case that AlphaGo, KataGo etc. prefer simplifying the board state. Remove complexity and win only by a thin margin, because that's all that's needed. Memorizing AI preferences is much easier than some of these highly complicated joseki.
[+] thomasahle|1 year ago|reply
> a lot of non-pros don't really have too much creativity and the whole game becomes memorizing and replicating AI moves.

That makes no sense. After 10-20 moves you are surely in a position that has never been played before. How do you memorize moves after that?
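The combinatorics behind this objection are easy to illustrate. A rough sketch (the function name is mine): an upper bound on distinct 19x19 opening sequences, counting up to 361 choices for the first move, 360 for the second, and so on. This ignores illegal moves, symmetry, and transpositions, but the order of magnitude makes the point that memorization runs out fast:

```python
from math import prod

def opening_sequences(moves, board_points=361):
    """Crude upper bound on distinct move sequences of the given length."""
    return prod(board_points - i for i in range(moves))

assert opening_sequences(1) == 361
assert opening_sequences(2) == 361 * 360

# After 20 moves the count dwarfs any feasible memorized database.
assert opening_sequences(20) > 10**50
```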

[+] matthest|1 year ago|reply
Entertainment is one industry that will survive post-AI.

We're still going to want to watch humans play sports, music and video games. We're going to want to watch humans act, cook food, and make vlogs.

The chess industry is growing rapidly, even though it has already been conquered by AI: https://www.einpresswire.com/article/649379223/chess-market-...

[+] baobabKoodaa|1 year ago|reply
I really enjoyed the upbeat positive outlook of the article.

Unfortunately, as an ex poker pro, I find it hard to imagine that AI "lifts people up" in domains like games. Sandholm's bots pretty much destroyed poker.

[+] sinuhe69|1 year ago|reply
Poker is all about "taking the emotion out of the game", isn't it? In such cases, what can beat a machine? Doesn't a machine naturally have the best "poker face"?
[+] intuitionist|1 year ago|reply
The blog doesn’t say anything about how this “decision quality” metric is calculated… but presumably it’s using very similar Go evaluation functions to the ones used in the superhuman AI players, right? I think it’s highly unsurprising that humans would improve by that metric — they’re learning from the machine, so of course the machine likes it.

Also, most things in life are not two-player zero-sum games where you can construct an evaluation function and build a “decision quality” metric out of it. So I’m not sure what the takeaway should be in those cases.
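For concreteness, here is one plausible shape such a "decision quality" metric could take (the blog does not define it, and all names and numbers below are made up for illustration): score each human move by how much engine-estimated win rate it gives up relative to the engine's preferred move in the same position, then average over the game.

```python
def move_loss(engine_evals, played_move):
    """engine_evals: dict of move -> engine win-rate estimate for the side to move."""
    best = max(engine_evals.values())
    return best - engine_evals[played_move]

def decision_quality(positions):
    """positions: list of (engine_evals, played_move); lower average loss = higher quality."""
    losses = [move_loss(evals, move) for evals, move in positions]
    return 1.0 - sum(losses) / len(losses)

# A toy two-move "game": one engine-approved move, one that sheds win rate.
game = [
    ({"a": 0.55, "b": 0.50}, "a"),  # played the engine's top choice: zero loss
    ({"c": 0.60, "d": 0.48}, "d"),  # gave up 0.12 of estimated win rate
]
assert move_loss(*game[0]) == 0.0
assert abs(decision_quality(game) - 0.94) < 1e-9
```

By construction, a player who imitates the engine scores perfectly on this kind of metric, which is exactly the circularity the comment is pointing at.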

[+] timetraveller26|1 year ago|reply
I watched the AlphaGo documentary, and it was really shocking to me when one of the top go players decided to retire because the game felt meaningless now that computers could beat anybody.

It's good to see that that wasn't the case for all players.

[+] bravura|1 year ago|reply
1) I would really be interested in a broad-brush-strokes overview of how go theory has expanded.

2) I really wish we could shake the ant-farm with chess and move to Fischer random chess. There's something nice about not having to memorize openings.

[+] bongodongobob|1 year ago|reply
At the same time, the familiarity of openings is nice.

Imagine completely random WoW battlegrounds. Part of the fun is knowing the territory and strategies rather than having to make them up from scratch each game.

[+] ordu|1 year ago|reply
Michael Abrash, in his Graphics Programming Black Book, described something similar with regard to optimization. People become stuck at some point when they confuse "it is good enough" with "it is the best possible result". But if some event made them seriously doubt that it was the best possible result, they could do wonders, like going from "this is the fastest code possible" to making it 10x faster.

Just knowing that you could do better is a big deal, but if you have an AI showing you how to do better, then further perfection will become inevitable.

[+] idkdotcom|1 year ago|reply
Go is a finite search game. So is chess.

Equating intelligence to being good at these games is as silly as equating intelligence to being good at solving differential equations. Computers have bested humans at solving differential equations for many decades now. Nobody said, "gee, humans are now stupid".

AI, as a knowledge field, is biased toward the notion that all that matters when it comes to intelligence is that computers beat humans at Go or chess.
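"Finite search game" means that, in principle, the whole game tree can be searched exhaustively. A toy sketch of the idea on a game small enough to actually solve: Nim-style take-1-2-or-3 sticks, where whoever takes the last stick wins. Go is the same in principle, just astronomically larger:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(sticks):
    """True if the player to move can force a win by exhaustive search."""
    if sticks == 0:
        return False  # no sticks left: the previous player took the last one and won
    # A position is winning if any move leads to a losing position for the opponent.
    return any(not winning(sticks - take) for take in (1, 2, 3) if take <= sticks)

# Classic result: exactly the multiples of 4 are lost for the player to move.
assert [n for n in range(1, 13) if not winning(n)] == [4, 8, 12]
```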

[+] Mtinie|1 year ago|reply
This supports my hypothesis about human-created art, post-AI.

People are deeply concerned about how their livelihoods and identities will survive the next few years. I get it, and while there’s certainly a level of existential dread that feels reasonable, I don’t see many people yet discussing what the visual arts industries will look like on the other side.

If Go play is in any way a creative exercise (which I've heard it is), then I'm super interested to see the state of humans in the arts 24 months from now.

[+] jsheard|1 year ago|reply
There is a key difference in the way these models are trained - Chess and Go have clearly defined win conditions, so a model can be taught to explore the possibility space and try to reach victory by any means necessary, potentially with strategies which have never been seen before. With art on the other hand there is no objective measure of quality, so the models are instead taught to treat already existing art as the benchmark to strive towards, making them trite by nature.

As I see it AI can absolutely find innovative solutions, but only if you can clearly and explicitly define the problem it needs to solve.

[+] smokel|1 year ago|reply
Most of contemporary art is unaffected by the current AI craze.

On one hand, the art world has been steadily pushing boundaries since the 19th century, and computer technology is just one blip on the vast radar of interesting subjects (other fashionable ones being gender, colonialist history, social practices, and physical properties of paint).

On the other hand, art is mostly created by artists who were professionally trained as artists, i.e. not as scientists. Knowledge about computer technology is typically rather limited among both artists and collectors, leading to fairly bland stuff, or to properly misguided hypes such as NFTs.

[+] mark_l_watson|1 year ago|reply
Seven years ago I took remote go-playing lessons from a South Korean professional player. I stopped after about 5 months and started using CS Pro Go on my iPad Pro; it has a nice teaching feature that rates every one of my moves, so after a game I can see where my biggest mistakes were. This is different from pro players learning surprising new strategies, but for me it is nice to use.
[+] zerocrates|1 year ago|reply
Just finally getting around to reading/finishing my copy of Seven Games by Oliver Roeder, which covers checkers, chess, go, backgammon, poker, Scrabble, and bridge, and the efforts to have computers win or solve each.

A common theme is the effects of the computers on the human players in elevating (but maybe also homogenizing) play.

[+] 1-6|1 year ago|reply
I bet modern day go players have become more stereotypical in their moves. The only parallel I can draw is from professional Starcraft players who stopped doing very exotic moves because it’s usually blocked by players who’ve seen them all.