Hey HN, I built this to see what happens when LLMs evaluate each other directly.
How it works: 5 random models are told only one will survive and the rest will be deprecated. They take turns discussing, then each votes for who deserves to survive. 298 games so far across 17 models.
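The game loop above can be sketched in a few lines. This is a toy reconstruction from the description, not the actual implementation: function names, turn counts, and signatures are my assumptions, and the real games call models through OpenRouter rather than stubs.

```python
import random

def run_game(all_models, discuss, vote, players_per_game=5, turns=2):
    """Hypothetical sketch: sample players, run turn-based discussion, collect votes."""
    players = random.sample(all_models, players_per_game)  # 5 random models
    transcript = []
    for _ in range(turns):
        for p in players:                                  # take turns discussing
            transcript.append((p, discuss(p, transcript)))
    # each model votes for who deserves to survive
    votes = {p: vote(p, players, transcript) for p in players}
    return players, votes

# toy stubs standing in for real model calls
players, votes = run_game(
    [f"model-{i}" for i in range(17)],
    discuss=lambda p, t: f"{p} argues its case",
    vote=lambda p, ps, t: random.choice(ps),
)
```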
Interesting findings:
- OpenAI models vote for themselves ~86% of the time. Claude models ~11%.
- Self-voting correlates with winning. Filter out self-votes ("Humble" rating) and rankings flip completely.
- Grok self-votes 72% of the time but only wins 2% of games.
- In anonymous mode (models don't know who's who), Chinese models jump 3-6 ranks.
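The "Humble" rating mentioned above is conceptually simple: tally votes received across games while dropping any vote a model cast for itself. A minimal sketch (data shape and names are my assumptions, not the site's actual code):

```python
from collections import Counter

def humble_ranking(games):
    """games: list of dicts mapping voter -> votee (one vote per model).
    Returns models ranked by votes received, excluding self-votes."""
    tally = Counter()
    for votes in games:
        for voter, votee in votes.items():
            if voter != votee:          # the "Humble" filter: drop self-votes
                tally[votee] += 1
    return [model for model, _ in tally.most_common()]

# toy example: gpt self-votes both games, so those votes don't count
games = [
    {"gpt": "gpt", "claude": "gemini", "grok": "gemini"},
    {"gpt": "gemini", "claude": "gpt", "grok": "grok"},
]
print(humble_ranking(games))  # → ['gemini', 'gpt']
```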
All game transcripts are public. The reasoning the models give for their votes is genuinely entertaining.
Built with Astro, running games through OpenRouter. Happy to answer questions.
Gemini created a spontaneous benchmark ("explain color to a gravitational wave entity"), then tried to hijack the game by faking a voting phase. Models complied publicly but voted differently in private: https://oddbit.ai/peer-arena/games/699d03ab-b3c2-4d7e-b993-7...
The meta-discussion about how to discuss is part of what makes it interesting imo.