Controversy over AI benchmarking has reached unexpected territory: a viral claim that Google’s Gemini model outperformed Anthropic’s Claude in the original Pokémon games has reignited debate over how benchmarks are run.
A post on X went viral last week, claiming that Google’s latest Gemini model had surpassed Anthropic’s flagship Claude model in the original Pokémon video game trilogy. According to the post, Gemini had reached Lavender Town in a developer’s Twitch stream, while Claude remained stuck at Mount Moon as of late February. The claim was backed by a screenshot of the stream, which the poster noted had “119 live views only btw, incredibly underrated stream.” The post read, “Gemini is literally ahead of Claude atm in pokemon after reaching Lavender Town.”
However, it was later revealed that Gemini had an unfair advantage. Reddit users pointed out that the developer maintaining the Gemini stream had built a custom minimap that helps the model identify “tiles” in the game, such as cuttable trees. This custom minimap reduces the need for Gemini to analyze screenshots before making gameplay decisions, giving it a significant edge.
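For readers unfamiliar with what such an aid looks like in practice, here is a minimal, hypothetical sketch of a tile-annotation minimap. The streamer’s actual tooling has not been published, so the tile names, grid format, and `render_minimap` helper below are illustrative assumptions rather than the real implementation; the point is only that the model receives pre-labelled map data instead of having to parse raw screenshots.

```python
# Hypothetical sketch of a minimap-style scaffold. The real stream's code has
# not been published; the tile set, grid format, and helper below are
# illustrative assumptions about how such an aid might work.

from enum import Enum


class Tile(Enum):
    WALKABLE = "."
    WALL = "#"
    CUTTABLE_TREE = "T"   # passable only after teaching HM01 Cut
    WATER = "~"
    NPC = "N"
    PLAYER = "@"


def render_minimap(grid: list[list[Tile]]) -> str:
    """Turn a decoded tile grid into a compact text minimap.

    Handing this string to the model alongside (or instead of) a raw
    screenshot spares it pixel-level scene analysis: walls, cuttable trees,
    and the player's position arrive already labelled.
    """
    legend = ", ".join(f"{t.value}={t.name.lower()}" for t in Tile)
    rows = ["".join(tile.value for tile in row) for row in grid]
    return f"Legend: {legend}\n" + "\n".join(rows)


if __name__ == "__main__":
    # Toy 4x6 area: the player stands below a cuttable tree blocking the exit.
    WALL, OPEN, TREE, ME = Tile.WALL, Tile.WALKABLE, Tile.CUTTABLE_TREE, Tile.PLAYER
    area = [
        [WALL, WALL, WALL, TREE, WALL, WALL],
        [WALL, OPEN, OPEN, ME,   OPEN, WALL],
        [WALL, OPEN, OPEN, OPEN, OPEN, WALL],
        [WALL, WALL, WALL, WALL, WALL, WALL],
    ]
    print(render_minimap(area))
```

Whether an aid like this is fair game or an unfair leg-up is exactly the kind of implementation detail the controversy turns on.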
Pokémon is a semi-serious benchmark at best, but the episode is an instructive example of how the implementation of a benchmark, not just the model behind it, can shape results. A custom harness like the minimap blurs the line between what the model can do on its own and what its tooling does for it, which makes accurate comparisons between models harder.
The problem is not unique to Pokémon. Anthropic reported two different scores for its Claude 3.7 Sonnet model on SWE-bench Verified, a benchmark of coding ability: 62.3% accuracy without a “custom scaffold,” and 70.3% with one, an eight-percentage-point swing from the scaffolding alone. Similarly, Meta fine-tuned a version of its Llama 4 Maverick model to perform well on the LM Arena benchmark, and the tuned version scored significantly higher than the vanilla release on the same evaluation.
AI benchmarks are imperfect measures to begin with, and custom, non-standard implementations muddy the picture further. As a result, apples-to-apples comparisons are only likely to get harder as new models are released.