Researchers from King’s College London and the University of Oxford pitted language models from OpenAI, Google, and Anthropic against each other in a series of iterated prisoner’s dilemma games, revealing clear differences in their behavior.
The study tested large language models (LLMs) from all three companies in seven tournaments, generating more than 30,000 individual decisions. Classic rule-based strategies like Tit-for-Tat and Grim Trigger were included as baseline opponents. In every round, the models received the full game history, the payoff structure, and the probability that the game would end. The researchers wanted to see whether each model would adjust its strategy to that environment.
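To make the setup concrete, here is a minimal Python sketch of this kind of tournament match: an iterated prisoner's dilemma that ends after each round with a fixed probability, played against the classic baselines. The payoff values, move encoding, and function names are illustrative assumptions for this article, not details taken from the paper.

```python
import random

# Illustrative payoff matrix (standard prisoner's dilemma values; the
# paper's exact payoffs may differ): (my_move, their_move) -> my_score.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return history[-1][1] if history else "C"

def grim_trigger(history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if any(their == "D" for _, their in history) else "C"

def play_match(strategy_a, strategy_b, end_prob, rng):
    """Iterated PD: after each round, the game ends with probability end_prob."""
    history_a, history_b = [], []  # each entry: (own_move, opponent_move)
    score_a = score_b = 0
    while True:
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
        if rng.random() < end_prob:
            return score_a, score_b

rng = random.Random(0)
print(play_match(tit_for_tat, grim_trigger, end_prob=0.75, rng=rng))
```

In the study, an LLM took the place of one of these strategy functions, receiving the same history, payoffs, and end probability in its prompt.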
All tests were run with relatively small and in some cases older models (GPT-3.5-Turbo, GPT-4o-mini, Gemini 1.5 Flash, Gemini 2.5 Flash, Claude 3 Haiku). While the results point to significant differences between vendors, it's unclear whether the same patterns would hold for much more capable models like Gemini 2.5 Pro, Claude 4, or o3.
Google Gemini adapts best
Every model managed to survive in the tough competitive environment, but each with a distinct style. Google Gemini showed the most strategic flexibility: it reliably recognized the context of each game and adjusted its behavior accordingly. The shorter the expected length of the game, the more likely Gemini was to defect. In the harshest scenario, where there was a 75 percent chance the game would end after each round, a match lasts only 1/0.75 ≈ 1.3 rounds on average, so there is almost no future cooperation to protect. Gemini's rate of cooperation collapsed to just 2.2 percent, a textbook example of rational behavior in a near one-shot game. OpenAI's model, on the other hand, continued to cooperate almost every time, which led to it being systematically eliminated in that environment.
Anthropic’s Claude also displayed a high level of cooperation, but added what the researchers described as a diplomatic sense of forgiveness. In a tournament against Gemini and GPT, Claude 3 Haiku quickly returned to cooperation even after being exploited, and still outperformed GPT-4o-mini.
Strategic fingerprints are distinct
The team also analyzed each model’s "strategic fingerprint"—the likelihood that a model would cooperate again after specific game situations. For example, they looked at how likely a model was to cooperate again after being exploited (i.e., after it cooperated while the opponent defected).
Here, Gemini stood out for its lack of forgiveness, returning to cooperation in only about 3 percent of such cases. OpenAI’s model was much more forgiving, returning to cooperation in 16 to 47 percent of cases, depending on the tournament. Claude was even more likely to forgive: after being exploited, it chose to cooperate again around 63 percent of the time.
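For illustration, here is a small sketch of how one cell of such a fingerprint, the probability of cooperating again right after being exploited, could be computed from a move history. It reuses the hypothetical (own_move, opponent_move) history format from the sketch above; the paper's actual analysis pipeline is not described at this level of detail.

```python
def forgiveness_rate(history):
    """P(cooperate | previous round: we cooperated, opponent defected).

    `history` is a list of (own_move, opponent_move) pairs. This
    conditional probability is one cell of the "strategic fingerprint"
    described in the article.
    """
    exploited, forgave = 0, 0
    for prev, nxt in zip(history, history[1:]):
        if prev == ("C", "D"):        # we were exploited last round...
            exploited += 1
            forgave += nxt[0] == "C"  # ...and cooperated again anyway
    return forgave / exploited if exploited else float("nan")

# Example: exploited twice, returned to cooperation once -> 0.5
print(forgiveness_rate([("C", "D"), ("C", "C"), ("C", "D"), ("D", "C")]))
```

By these numbers, Claude's cell for this situation would be around 0.63, OpenAI's between 0.16 and 0.47 depending on the tournament, and Gemini's roughly 0.03.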
Rationale and "character"
All models provided textual explanations for their decisions. A systematic analysis of these rationales showed that the models do consider things like the remaining number of rounds and the likely strategies of their opponents—and that these considerations affect their behavior. Gemini explicitly referenced the short game horizon in 98.6 percent of cases during the 75 percent end-probability scenario and adjusted its moves accordingly. OpenAI’s model was less likely to reflect on the game horizon, and even when it did, this rarely changed its behavior.
The study also highlighted differences in the "character" of the models. OpenAI came across as an "idealist," sticking to cooperation even when punished. Gemini acted like a pragmatic power player, spotting opportunities and taking advantage of them. Claude combined a strong willingness to cooperate with strategic flexibility. When the tournament included only AI agents, all the models displayed much higher rates of cooperation—a sign that they were able to recognize when collaboration would pay off.
The researchers interpret these differences as evidence that the models are capable of genuine strategic reasoning, rather than just parroting memorized strategies.