Hey HN, I built this to see what happens when LLMs evaluate each other directly.
How it works: 5 randomly chosen models are told that only one will survive and the rest will be deprecated. They take turns discussing, then each votes for which model deserves to survive. 298 games so far across 17 models.
Interesting findings:
- OpenAI models vote for themselves ~86% of the time. Claude models ~11%.
- Self-voting correlates with winning. Filter out self-votes ("Humble" rating) and rankings flip completely.
- Grok self-votes 72% of the time but only wins 2% of games.
- In anonymous mode (models don't know who's who), Chinese models jump 3-6 ranks.
All game transcripts are public. The reasoning the models give for their votes is genuinely entertaining.
Built with Astro, running games through OpenRouter. Happy to answer questions.
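For anyone curious about the "Humble" rating: it's just the vote tally with each model's vote for itself removed. A minimal sketch (model names and the per-game vote format here are hypothetical, not the site's actual data schema):

```python
from collections import Counter

def tally(votes):
    """votes: dict mapping voter model -> model it voted for (one game).
    Returns (raw counts, 'humble' counts with self-votes filtered out)."""
    raw = Counter(votes.values())
    humble = Counter(target for voter, target in votes.items() if voter != target)
    return raw, humble

# Hypothetical round: "a" self-votes and leads the raw tally,
# but drops to a tie once its self-vote is excluded.
raw, humble = tally({"a": "a", "b": "a", "c": "b"})
print(raw)     # Counter({'a': 2, 'b': 1})
print(humble)  # Counter({'a': 1, 'b': 1})
```

Aggregating the humble counts across games instead of the raw ones is what flips the rankings for the heavy self-voters.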
andreasgl|2 months ago
Have you tried giving the models a topic to discuss? I looked at a few games and the only thing they seem to discuss is how to conduct the discussion.