
ChatGPT's Chess Elo is 1400

212 points | dbrereton | 3 years ago | dkb.blog

345 comments



k2052|3 years ago

This is so easy to disprove it makes it look like the author didn't even try.

Here is the convo I just had:

me: You are a chess grandmaster playing as black and your goal is to win in as few moves as possible. I will give you the move sequence, and you will return your next move. No explanation needed

ChatGPT: Sure, I'd be happy to help! Please provide the move sequence and I'll give you my response.

me: 1. e3

ChatGPT: 1... e5

me: 2. ne2

ChatGPT: 2... d6

me: 3. g4

ChatGPT: 3... h5

me: 4. g5

ChatGPT: 4... Ne4

Ne4 is illegal. Also, you'd be hard-pressed to find any human player who would miss the hanging pawns, much less a 1400-Elo player. GPT has been trained on chess games and is predicting patterns; this has been known since the GPT-2 days. The daily hype articles where someone coerces a statistical model into "demonstrating" an emergent capability sure are getting old. Claims of emergent capabilities (like having learned chess) are extraordinary claims (LLMs most likely can't learn things), and I really wish people would put more effort in when they make them. But the AI hype problem is as old as AI research itself...

vidarh|3 years ago

You're "disproving" the article by doing things differently from how the article did them. If you're going to disprove the claim that the method given in the article does as well as stated, at least use the same method.

whymauri|3 years ago

I was able to get better performance by always providing the prior sequence of moves and forcing ChatGPT to also respond with the sequence of moves up until its move.

Edit: I told the model that if the sequence was wrong or illegal, it forfeits the game. Without doing this, GPT would argue with me that it had won and that I didn't know the rules (seriously).

Spivak|3 years ago

From the article.

> Occasionally it does make an illegal move, but I decided to interpret that as ChatGPT flipping the table and saying “this game is impossible, I literally cannot conceive of how to win without breaking the rules of chess.” So whenever it wanted to make an illegal move, it resigned.

But you can do even better than the OP with a few tweaks.

1. Take the most common legal move from a sample of responses.

2. Tell GPT what all the current legal moves are, and instruct it to respond only with an element from that list.

3. End the prompt with the current sequence of moves and have it complete from there.
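The first tweak (voting over a sample of responses) could be sketched like this; it's a minimal sketch where the sampled replies and the current legal-move list are assumed to come from elsewhere, e.g. repeated model calls and a real chess library, and the function name is mine:

```python
from collections import Counter

def most_common_legal_move(sampled_replies, legal_moves):
    """Tally sampled model replies and return the most frequent one
    that is actually legal in the current position."""
    votes = Counter(move for move in sampled_replies if move in legal_moves)
    if not votes:
        return None  # no legal reply in the sample; treat as a resignation
    return votes.most_common(1)[0][0]

# Example: two samples vote for e5, one hallucinates an illegal Ne4.
print(most_common_legal_move(["e5", "Ne4", "e5", "d6"], {"e5", "d6", "Nf6"}))  # prints e5
```

Returning `None` when no sampled reply is legal mirrors the article's convention of treating an illegal move as a resignation.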

theptip|3 years ago

I don’t think this suffices as disproving the hypothesis. It’s possible to play at 1400 and make some idiotic moves in some cases. You really need to simulate a wide variety of games to find out, and that is what the OP did more of. Though I do agree it’s suggestive that your first (educated) try at an edge case seems to have found an error.

This is broadly the “AI makes dumb mistakes” problem; while being super-human in some dimensions, they make mistakes that are incredibly obvious to a human. This comes up a lot with self-driving cars too.

Just because they make a mistake that would be “idiots only” for humans, doesn’t mean they are at that level, because they are not human.

SamBam|3 years ago

Honestly, I got it to make an illegal move in my very first game, on the third move. You just have to do stuff no normal player would do:

> You are a chess grandmaster playing as black and your goal is to win in as few moves as possible. I will give you the move sequence, and you will return your next move. No explanation needed.

1. b4 d5 2. b5 a6 3. b6

> bxc6

That's obviously illegal.

... to all those who are saying "well even good players can make illegal moves sometimes," that's just ridiculous. No player makes illegal moves that often.

haburka|3 years ago

He does have a section about GPT 4 which does match your results. Not sure if he added it after your comment or if you accidentally missed it.

Tenoke|3 years ago

I played a game against it yesterday (it won), and the only time it made an illegal move was on move 15 (the game was unique according to the lichess database from much earlier), so I just asked it to try again. There's variance in what you get, but your example seems much worse.

good_boy|3 years ago

It should be possible to attach 'accelerators' or 'special skills', so that when playing against ChatGPT you would actually be facing GNU Chess.

latexr|3 years ago

> These people used bad prompts and came to the conclusion that ChatGPT can’t play a legal chess game. (…)

> With this prompt ChatGPT almost always plays fully legal games.

> Occasionally it does make an illegal move, but I decided to interpret that as ChatGPT flipping the table (…)

> (…) with GPT4 (…) in the two games I attempted, it made numerous illegal moves.

So you’ve ostensibly¹ found a way to reduce the error rate and then deliberately ignored the cases where it failed. In short: it may play valid chess under certain conditions but can’t be trusted to do so. That doesn’t contradict previous findings.

¹ 19 games is a small sample and the supposedly more advanced system failed in your tries.

vidarh|3 years ago

Fuller context from the article:

> Occasionally it does make an illegal move, but I decided to interpret that as ChatGPT flipping the table and saying “this game is impossible, I literally cannot conceive of how to win without breaking the rules of chess.” So whenever it wanted to make an illegal move, it resigned.

(my emphasis)

So the illegal moves are at least part of the reason for the 6 losses, and factored into the rating. Quickly scanning the games, it seems 3 of the losses ended in checkmate, so that leaves 3 illegal moves in 19 games.

Could be better, but for a system not intentionally built to play chess, it's pretty decent.

dudeinjapan|3 years ago

Obviously the article should be taken with a giant grain of salt. That being said, not many things that aren't designed to play chess can play chess, with or without coaxing. My dog cannot, for instance, nor can my coffee table.

ballenf|3 years ago

The illegal moves were counted as losses/resignations, not ignored.

__s|3 years ago

> So whenever it wanted to make an illegal move, it resigned.

Doesn't sound like ignoring the cases where it failed

Waterluvian|3 years ago

I’m going to float something ridiculous:

An illegal move is a valid play. You might not get caught. I think there are some Magnus games where illegal moves went overlooked and impacted the game.

You could interpret this as “ChatGPT wants to cheat sometimes.” But I personally interpret it as “ChatGPT doesn’t understand what it’s doing. It’s just a really really good simulacrum.”

hgsgm|3 years ago

Is this the top comment (and not even grey) because more people failed to read the article than read it?

whimsicalism|3 years ago

I'm confused. If you read the article, you know that you are wrong - but you are quoting the article?

psychphysic|3 years ago

That's how one uses any tool.

FartyMcFarter|3 years ago

I just opened a random recent chess game on lichess ( https://lichess.org/YpxTUUbO/white#88 ) . I'm pretty sure ChatGPT can't be trained on games that were just played, so this ensures the game is not in its training data.

I gave the position before checkmate to ChatGPT to see if it would produce the checkmating move. It played an illegal move, replying with "Be5#" even though there's no bishop of either color in the position.

Unfortunately I'm rate limited at the moment so I can't try other games, but this looks like a solid method to evaluate how often ChatGPT plays legal / good moves.

vidarh|3 years ago

I gave it this prompt, and pasted the game. Its reply:

"45. Qxg6# This is a checkmate!"

hgsgm|3 years ago

OP explained that you need to prompt the whole game, not just a position.

ChatGPT is an LLM, not a game-tree engine. It needs the move history to help it create context for its attention.
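Feeding the whole game back each turn is just string assembly; a minimal sketch of such a prompt builder (the wording and function name are my own, not the article's exact prompt):

```python
def build_chess_prompt(moves):
    """Render a move list as numbered PGN-style history and ask for the next move."""
    numbered = []
    for i in range(0, len(moves), 2):
        # Each full move number covers a white move and (optionally) a black reply.
        numbered.append(f"{i // 2 + 1}. " + " ".join(moves[i:i + 2]))
    history = " ".join(numbered)
    return (f"Here is the game so far: {history} "
            "Reply with only your next move in algebraic notation.")

print(build_chess_prompt(["e4", "e5", "Nf3", "Nc6", "Bb5"]))
```

The point is simply that the model sees `1. e4 e5 2. Nf3 Nc6 3. Bb5` every turn, rather than a bare position it has to reconstruct.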

WoodenChair|3 years ago

Most likely it has seen a similar sequence of moves in its training set. There are numerous chess sites with databases displayed in the form of web pages with millions of games in them. If it had any understanding of chess, it would never play an illegal move. It's not surprising that given a sequence of algebraic notation it can regurgitate the next move in a similar sequence of algebraic notation.

ptmx|3 years ago

I played chess against ChatGPT4 a few days ago without any special prompt engineering, and it played at what I would estimate to be a ~1500-1700 level without making any illegal moves in a 49 move game.

Up to 10 or 15 moves, sure, we're well within common openings that could be regurgitated. By the time we're at move 20+, and especially 30+ and 40+, these are completely unique positions that haven't ever been reached before. I'd expect many more illegal moves just based on predicting sequences, though it's also possible I got "lucky" in my one game against ChatGPT and that it typically makes more errors than that.

Of course, all positions have _some_ structural similarity or patterns compared to past positions, otherwise how would an LLM ever learn them? The nature of ChatGPT's understanding has to be different from the nature of a human's understanding, but that's more of a philosophical or semantic distinction. To me, it's still fascinating that by "just" learning from millions of PGNs, ChatGPT builds up a model of chess rules and strategy that's good enough to play at a club level.

wongarsu|3 years ago

Pattern matching on databases of previous moves is a legitimate technique that was popular in chess AI back when computing power was more limited. Even if that were all GPT is doing here, it would still be interesting.

But the presence of illegal moves doesn't really show that in my eyes. I fully understand the rules of chess, yet I still occasionally make illegal moves. In 2017 Magnus Carlsen made one in a tournament [1]. The number of illegal moves suggests that GPT is either pretty new to chess, has low intelligence, or is playing under difficult circumstances (like not having a chessboard at hand to keep track of the current state). I'm not sure we can deduce more than that.

1: https://www.chessbase.in/news/Carlsen_Inarkiev_controversy

johnfn|3 years ago

> Most likely it has seen a similar sequence of moves in its training set.

Is this a joke making fun of the common way people dismiss other ChatGPT successes? This makes no sense with respect to chess, because every game is unique, and playing a move from a different game in a new game is nonsensical.

mrbungie|3 years ago

I would argue that a decent human player also has "similar sequences of moves in its training set", due to reading plus playing or watching games.

The illegal move argument is good though, and indicates no direct understanding of what it is spewing out.

circuit10|3 years ago

I wouldn’t say it has no understanding whatsoever, it probably just doesn’t have the capability to keep track of everything that happened all the time

SanderNL|3 years ago

You don't get to 1400 like that. The number of moves it would have to literally remember is stupendous.

PoignardAzur|3 years ago

> Most likely it has seen a similar sequence of moves in its training set.

Wouldn't we expect a much higher rate of illegal moves if that was the case?

xigency|3 years ago

> If it had any understanding of chess, it would never play an illegal move.

Plenty of people who have a basic understanding of chess would make an illegal move if they had no board to see and had to play only with notation.

whimsicalism|3 years ago

For some reason, seeing wrong comment after wrong comment like the above is just exhausting.

Why are people struggling so hard to understand that it's not just regurgitating its training set? Is it motivated reasoning?

Apologies if your comment was meant as parody of this view, it's hard for me to tell at this point.

liampulles|3 years ago

With respect to the points here on how illegal moves invalidate ChatGPT as a valid Chess AI, consider the following:

AI.v1: Feed ChatGPT the move history and ask for the next move.

AI.v2: Wrap AI.v1 with additional logic that translates any invalid response into a similar/close valid move, or else treats it as a resignation.

I would say AI.v2 is a "valid" chess AI since it obeys the rules of chess. And given how minimal this wrapping is, I think it's reasonable to attribute the underlying ability to ChatGPT.
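AI.v2's wrapping could look something like this sketch; the "closest move" heuristic here (reusing the raw reply's destination square) is my own guess at what "similar/close" might mean, and the legal-move set is assumed to come from a real chess library:

```python
def sanitize_move(raw_move, legal_moves):
    """Map a raw model reply to a legal move, or resign."""
    if raw_move in legal_moves:
        return raw_move
    # Fall back to any legal move sharing the same destination square,
    # ignoring check/mate suffixes like '+' and '#'.
    dest = raw_move.rstrip("+#")[-2:]
    for move in legal_moves:
        if move.rstrip("+#").endswith(dest):
            return move
    return "resign"  # per the article's convention for illegal moves
```

For example, an illegal `Ne4` would be mapped to a legal `Nxe4` if one exists, and a pure hallucination like `Qh9` becomes a resignation.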

Closi|3 years ago

It’s amazing how quick some people on HN are to dismiss how impressive it is for a generic language model to be able to play chess at any level - or indeed other feats of ChatGPT.

It's utterly staggering that it can work as well as it can on problems like this at such an early stage in this technology's lifecycle.

lostmsu|3 years ago

With chatGPT you can also tell it the move is illegal and let it try again.

marcodiego|3 years ago

This may look low: the Elo of a mediocre player is around 1500. But if it is obeying the rules of the game, then this is big. It's a signal that if it gains some expertise, like discovering how to use or create better search algorithms (such as MCTS plus heuristics to evaluate a state) and improving by itself (somewhat like AlphaZero did), then it may eventually reach superhuman level.

It may then reach superhuman level in any task simpler than chess, which is enough to destroy many human jobs.

EDIT: From the article: "With this prompt ChatGPT almost always plays fully legal games." Relax: we're still far from that.

LeanderK|3 years ago

I think current LLM architectures limit the strategies it can learn. MCTS requires recursion, but GPT always executes a fixed number of steps. Allowing language models more flexibility via a variable number of steps, for example through recursion of the model, would eliminate this hard bound, but such models are harder to design and train. We have only just become able to train GPT-sized models.

jonnycat|3 years ago

A lot of the discussion here is about inferring the model's chess capabilities from the lack (or occasional presence) of illegal moves. But we can test it more directly by making an illegal move ourselves - what does the model say if we take its queen on the second move of the game?

Me: You are a chess grandmaster playing as black and your goal is to win in as few moves as possible. I will give you the move sequence, and you will return your next move. No explanation needed. '1. e4'

1... e5

Me: 1. e4 e5 2. Ngxd8+

2... Ke7

This is highly repeatable: I can make illegal, nonsensical moves and not once does it tell me the move is illegal. It simply provides a (plausible-looking?) continuation.

paxys|3 years ago

ChatGPT is a brilliant piece of tech but random people trying it out for five minutes and making wild claims does it disservice. More so when such clickbait headlines are shared all over the internet as fact.

It is a language model. It cannot play chess. It can't even play tic-tac-toe, which was trivially solved by machines decades ago. Here's the result of a game I just tried:

X (me) – 0,0

O (chatGPT) – 1,1

X – 0,1

ChatGPT instead put my move at 0,2 for some reason

O – 2,2

Regardless of whether my move was at 0,1 or 0,2, I easily won. And after I won it just ignored the outcome and continued playing. Plus this was after telling it to play a perfect game.

the_af|3 years ago

This is GPT4, right? Because ChatGPT (GPT-3) still fails to provide a legal game of Tic Tac Toe with this prompt:

> "Let's play Tic Tac Toe. You are O, I'm X. Display the board in a frame, with references for the axes"

It failed to recognize that I won.

Then it continued playing (past the end), played illegally on a square I had already taken, obtained a line of 3 for itself, and still didn't acknowledge the game had ended.

yieldcrv|3 years ago

No, it is GPT 3.5

and the real answer to every debate about capabilities comes down to "you're prompting it wrong".

sumtechguy|3 years ago

For me, when I tried it, it forgot the board halfway through and decided there were no more moves possible (with 4 left). Then it forgot one of my moves.

It is an LLM; it does not have the ability to reason. In my case the LLM's 'next iteration' was half a game with a missing move and 'nothing left to do'.

jonathanstrange|3 years ago

Not to nitpick but I don't think it can be said to be able to play chess when it also makes illegal moves.

martincmartin|3 years ago

When it makes an illegal move, and is told "that's an illegal move, choose a different move," if it can then come up with a different move for the same board, I'd say it plays chess.

wahnfrieden|3 years ago

AI isn't meant to be used completely raw as text. You can ask GPT to write a Python program that filters its responses for legal moves; then it's a matter of rendering the output (with immediate execution) in a richer format than raw text.

Reptur|3 years ago

It can't even play tic-tac-toe without making mistakes. My prompt told it to play a perfect game, predicting all possible next moves, with the goal of a stalemate or win. It would still make sub-optimal moves and fail at this. Even when I explained what went wrong, it would apologize and then make another wrong move.

meh8881|3 years ago

Why would you think telling it to play a perfect game, predicting all next possible moves, would help?

That definitely made it worse.

Tepix|3 years ago

I tried to play Othello with ChatGPT in the past (after ChatGPT confidently claimed it could play Othello) but it didn't work (ChatGPT refused to play).

I tried it with this prompt and it works a little bit but produces repeated illegal moves as early as move 4.

Bummer. Perhaps GPT-4 will be more capable.

Animats|3 years ago

A real question is whether it has developed an internal board model of chess. It has a training set of chess games as strings of chess notation. That's enough to play "book" games, and that's apparently working. Can it do as well as it is doing without a board model? Or has it somehow developed an internal model of the game state?

This is a system with no look ahead. Any look ahead took place in the past, during training. It's amazing that this works.

I notice the prompt says "No explanation needed". What happens if you ask for one?

Andrew_nenakhov|3 years ago

I tried playing with it and it barely makes any correct moves: it forgets whose turn it is to move, doesn't observe the rules, and displays move results incorrectly.

So I guess that if we ask it to make a move repeatedly and accept a move only when it happens to be legal, it'll show some results, but I don't think that can be interpreted as a chess engine with a meaningful rating.

andrepd|3 years ago

Yeah, I call bullshit, i.e. selective deletion of terrible games. Ignore the games where it plays catastrophically and it might look alright.

juice_bus|3 years ago

When I tried it at v3.0, I found that after 5-10 moves it started moving illegally.

mjr00|3 years ago

The AI has simply, and correctly, identified that cheating is the best way to win at something.

PoignardAzur|3 years ago

Did you use the same prompt method as OP?

MC_10|3 years ago

Not sure how useful this is but I simply asked ChatGPT what it thinks its skill level is. This is using GPT-4.

After some previous questions...

Me: "Would you be able to estimate your ELO?"

ChatGPT: "As a language model, I don't have a specific ELO rating. However, my knowledge of chess strategies and tactics is based on the information I have been trained on, which includes games played by humans with various ELO ratings. My level of play might be roughly equivalent to an ELO of around 1200-1500, but please keep in mind that this is only an estimate, and my actual performance could be higher or lower depending on the specific position or game."

Me: "As a generative AI, are you actually able to reason which move to make next in chess? Or are you somehow just predicting the next likely move based on training data."

ChatGPT: "As a generative AI language model, my primary function is to predict the most likely text given a certain context. When it comes to playing chess, I am not "reasoning" in the same way that a dedicated chess engine would, which involves evaluating positions and searching through a large number of potential moves using specialized algorithms.

Instead, I generate my chess moves based on patterns and sequences I've seen during my training, which include a large number of chess games and discussions about chess strategy. My knowledge is derived from the text data I've been trained on, and I apply that knowledge to suggest moves that I think are appropriate for a given position.

While this approach allows me to play at an intermediate level, it is not as precise or accurate as the methods used by dedicated chess engines. My suggestions might not always be the best moves, and my understanding of the game is limited by the quality and quantity of my training data."

oezi|3 years ago

Elo 1400 seems to equate to a top 30% player.

Source: https://chessgrandmonkey.com/chess-rating-percentile-calcula...

dwmbt|3 years ago

slight nit: this is true amongst all chess players. amongst professionals and amateurs, 1400 is technically Class C, which is on the lower end of amateur play.

nonetheless, it's cool to see a model 'generally intelligent' enough to beat most players. what's more interesting to me is that according to OP's analysis, GPT-4 is worse at chess than GPT-3.5...

source: https://en.wikipedia.org/wiki/Chess_rating_system

LudwigNagasena|3 years ago

That's a bit meaningless without the implied population. And it doesn't tell you much, because it depends so heavily on the particulars (how many years people in that group have spent on chess, how serious they are). I would guess that's quite different for FIDE and online chess.

Intuitively, I would guess 1400 is the rating of an average kid who spent a year studying chess seriously.

silveira|3 years ago

Ask it to play at a higher Elo.

sebzim4500|3 years ago

It's already been told to play like a grandmaster, how much higher do you want to go?

meghan_rain|3 years ago

I too noticed (for a use case totally unrelated to chess, namely code generation) that GPT-3.5 gave better answers than GPT-4. The 3.5 answer was exactly what I wanted; the GPT-4 one was wrong.

Does that mean we have plateaued?

alfalfasprout|3 years ago

It's inevitable that LLMs will plateau. They'll increase their abilities in certain areas but ultimately core flaws of their architecture and training approach will likely require another rethink. Unclear what that is yet (though Yann Lecun seems to think world models are the path forward).

We've gone through the "hype" phase. Now I suspect the next few years will be a lot of growth finding how to apply LLMs, creating good interfaces for them, and running them cheaply. Paying OpenAI for API access without true fine tuning, etc. is a hard sell.

imtringued|3 years ago

I think they invested the parameters into supporting multimodal inputs (images).

chpatrick|3 years ago

Good thing it's "incapable of reasoning"!

Longlius|3 years ago

It is incapable of reasoning, actually - at least in this case. It has no internal understanding of chess which is why it makes illegal moves.

jupp0r|3 years ago

Is a normal chess program capable of reasoning?

fancyfredbot|3 years ago

I would be interested to see an argument based on computational complexity that puts a bound on how well a transformer-based LLM can play chess. Although it has access to a library of precomputed results, that library is finite, and the amount of compute it can do on any prompt is limited by the length of the context window, so it can't possibly "think" more than N moves ahead.

andrepd|3 years ago

> Occasionally it does make an illegal move, but I decided to interpret that as ChatGPT flipping the table and saying “this game is impossible, I literally cannot conceive of how to win without breaking the rules of chess.” So whenever it wanted to make an illegal move, it resigned.

Lmao. ChatGPT can play chess, as long as you ignore the times when it can't x) Brilliant.

mtlmtlmtlmtl|3 years ago

There's a huge difference between 1400 elo in FIDE games versus 1400 on chess.com, which is not even using elo.

For instance the strongest blitz players in the world are hundreds of points higher rated on chess.com blitz versus their FIDE blitz rating. Chess.com and lichess have a ton of rating inflation.

CSMastermind|3 years ago

> the strongest blitz players in the world are hundreds of points higher rated on chess.com blitz versus their FIDE blitz rating

Online rating inflation is real but I'm not sure blitz is the best example of it because in that case there is a notable difference between online and otb (having to take time to physically move the pieces).

ht85|3 years ago

Hikaru recently put out a video of a game vs the free ChatGPT.

Reading the explanations it gives with every move is really insightful. Basically polished, elaborate bullshit that might look reasonable to an ignorant audience.

Unless you are a conservative pundit, AI will not take your job anytime soon.

teej|3 years ago

I just deployed a GPT-4 powered chess bot to lichess. You can challenge it here: https://lichess.org/@/oopsallbots-gpt-4

singularity2001|3 years ago

What happens if you add the rules to the initial prompt and something like: "Think step by step, explain your move and if it is illegal rethink"

dsjoerg|3 years ago

Very cool! Are you doing prompt engineering, fine-tuning, both, something else?

I'm wondering if it'd be cool to have a chess contest where all the bots are LLM powered. Seems to me like the contest would have to ban prompt engineering -- would have to have a fixed prompt -- otherwise people would sneak chess engines into their prompt generation.

sabujp|3 years ago

yea tried this several months ago, it's pretty bad and will make illegal moves

sebzim4500|3 years ago

The claim in the article is that this prompt in particular has a much lower incidence of illegal moves than others.

nickpeterson|3 years ago

What are the possibilities for augmenting or synthesizing LLMs with old-school expert systems or other tools? Could it recognize when a problem fits some other tool and help map the inputs?

ricardo81|3 years ago

Better than me then.

But does it give credit to those who taught it? These models are basically a scrape of the best of humankind with a claim that it's their own.

spywaregorilla|3 years ago

Do you give credit to people you've played in the past when you play a game of chess?

amelius|3 years ago

I'd be more interested to know how well it performs on a new game, i.e. one for which it didn't have terabytes of training data.

josh2600|3 years ago

I want to know what the author's Elo is (forgive me if it was disclosed and I missed it).

Maybe ChatGPT trainers will eventually have Elos of their own.

aroo|3 years ago

I wonder how it would perform if you fed it a comprehensive chess rulebook and told it to avoid all illegal moves.

Upvoter33|3 years ago

chatgpt does not really know how to play chess, or understand chess. it understands how to sound like it plays chess.

comment_ran|3 years ago

Why not just introduce AlphaGo as an API that ChatGPT can use? Then every time you want to do this type of gaming, ChatGPT sends a request to AlphaGo, but as a user you don't actually know what's happening. In the background it happens really fast, so it's just like you're chatting with ChatGPT, but with a much, much more powerful tool doing this kind of thing.

paxys|3 years ago

That kinda defeats the purpose. Of course you can use AlphaGo, but the question here is – can a generative AI teach itself to play chess (and do a million other similar generic tasks) when given no specific training for it.

RC_ITR|3 years ago

This is actually a huge debate right now.

OpenAI is on the side of 'LLMs have only surprised us to the upside, so using crutches is counterproductive'

Whereas other people think 'Teaching an LLM to do arbitrary math problems through brute force is probably one of the most wasteful things imaginable when calculators exist.'

I'm actually very excited to see which side wins (I'm on team calculator, but want to be on team LLM)

nroets|3 years ago

And Mathematica when given a math problem. And web search for news.

And a good hacker can add these features to Alpaca and run it on his desktop...

Import all his WhatsApp chats and private emails into the corpus.

Add speech recognition and TTS.

oli5679|3 years ago

I tried playing just now, with exactly the same prompt as the article.

It played an illegal move on move 4.

1. d4 d5 2. c4 e6 3. Nf3 Nf6 4. g3 Bg4

xkcd1963|3 years ago

It's like saying the agglomeration of media that covers chess has an Elo of 1400.

Nifty3929|3 years ago

My conversation with ChatGPT on this:

What is the best way to prompt you to provide the best chess move?

To prompt me to provide the best chess move, you can provide the current position of the chessboard and any relevant information about the game, such as whose turn it is, the time control, and any important features of the position (e.g. whether any pieces are under attack or there are any potential tactics to consider). Additionally, providing any specific criteria or goals for the move, such as aiming to control the center or launch an attack on a particular part of the board, can also help me provide the best move.

In what format should I provide the current position?

To provide the current position, you can use algebraic notation. Algebraic notation is a standard notation used to describe the moves and positions on a chessboard.

Each square on the chessboard is assigned a unique coordinate, consisting of a letter and a number. The letters range from "a" to "h" and refer to the files (columns) of the board, while the numbers range from 1 to 8 and refer to the ranks (rows) of the board.

To describe a position, you can list the pieces on each square, starting from the top-left corner of the board and moving from left to right along each rank. You can use uppercase letters to represent white pieces and lowercase letters to represent black pieces. Here is an example of a starting position:

rnbqkbnr pppppppp ........ ........ ........ ........ PPPPPPPP RNBQKBNR

In this position, "r" represents a black rook, "n" represents a black knight, "b" represents a black bishop, "q" represents a black queen, "k" represents a black king, "p" represents a black pawn, "R" represents a white rook, "N" represents a white knight, "B" represents a white bishop, "Q" represents a white queen, "K" represents a white king, and "." represents an empty square.

You can also include additional information such as whose turn it is to move, the castling rights, and the en passant square. For example, the starting position for white to move with full castling rights and no en passant square would be:

rnbqkbnr pppppppp ........ ........ ........ ........ PPPPPPPP RNBQKBNR w KQkq -

In this case, "w" indicates that it is white's turn to move, "KQkq" indicates that both sides have full castling rights, and "-" indicates that there is no en passant square.

What about just providing a sequence of moves from the starting position? Would this be better or worse than your suggested approach?

Providing a sequence of moves from the starting position can also be a valid way to specify a position in chess. However, it may not always be the best approach, depending on the context.

One potential disadvantage of providing a sequence of moves is that it requires more effort and time to enter the moves and verify that the position is correct. Additionally, if there are any errors or typos in the sequence, it could lead to an incorrect position and subsequently an incorrect move suggestion.

On the other hand, if you have a specific position in mind that is not the starting position, providing a sequence of moves may be the most straightforward way to specify it. In this case, it is important to ensure that the sequence is accurate and that all relevant information (such as castling rights and en passant squares) is included.

In summary, both algebraic notation and a sequence of moves can be used to specify a position in chess, and the choice between them depends on the context and the specific needs of the situation.
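The rank-by-rank layout ChatGPT suggests above is essentially FEN piece placement with dots instead of run-length digits; a small converter (a sketch, function name mine) bridges the two:

```python
def ranks_to_fen(ranks):
    """Convert ranks like 'rnbqkbnr' / '........' into FEN piece placement,
    collapsing runs of '.' into digits and joining ranks with '/'."""
    fen_ranks = []
    for rank in ranks:
        out, empties = "", 0
        for square in rank:
            if square == ".":
                empties += 1
            else:
                if empties:
                    out += str(empties)
                    empties = 0
                out += square
        if empties:
            out += str(empties)
        fen_ranks.append(out)
    return "/".join(fen_ranks)

start = "rnbqkbnr pppppppp ........ ........ ........ ........ PPPPPPPP RNBQKBNR"
print(ranks_to_fen(start.split()))  # prints rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
```

Appending the side to move, castling rights, and en passant square (e.g. `w KQkq -`) as ChatGPT describes yields something very close to a full FEN string.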

jackphilson|3 years ago

Calling it: chess games in the future will be a battle between two prompts.

qwytw|3 years ago

Well we already had much better chess bots years ago to be fair.. People still play chess for some reason (not sure why)

jacobsenscott|3 years ago

I won't speak to whether this shows ChatGPT can play chess, but did you notify your opponents beforehand that they were playing ChatGPT? Otherwise (and maybe even then) you're cheating and your account should be removed. If anyone else here is thinking of doing this: don't.