AI in a Bargaining Game: Why Some Chatbots Act Fair While Others Give Too Much
A study by economists affiliated with the Banco Central do Brasil, the University of Chicago, CEPR, and NBER finds that large language models show mixed behavior in bargaining situations: sometimes rational, sometimes human-like, and occasionally overly generous. The research suggests AI often prioritizes fairness and avoiding rejection over maximizing profit, raising questions about deploying such systems in autonomous economic decisions.
As artificial intelligence systems become more involved in decision-making, economists are beginning to ask an important question: how would an AI behave if it had to negotiate money with someone else? A new study by Douglas K.G. Araujo of the Banco Central do Brasil and Harald Uhlig of the University of Chicago, the Centre for Economic Policy Research (CEPR), and the National Bureau of Economic Research (NBER) explores this issue by placing large language models in a classic economic bargaining experiment.
The researchers wanted to understand whether AI behaves like a perfectly rational economic agent, like a human, or in some entirely different way. Their findings suggest that AI decision-making can vary widely depending on the situation, the amount of money involved, and even whether the AI believes it is negotiating with a human or another AI.
A Classic Economics Game Meets Modern AI
To test AI behavior, the researchers used the Ultimatum Game, a well-known experiment in economics. In the game, one player receives a sum of money and proposes how to divide it with another player. The second player can either accept the offer or reject it. If the offer is rejected, both players get nothing.
Traditional economic theory predicts that the proposer should keep almost everything and offer the smallest possible amount to the other player. Since getting something is better than nothing, a rational responder should accept even a tiny offer.
But real people rarely behave that way. In experiments, proposers usually offer around half of the money, and responders often reject offers they consider unfair. This makes the game a useful way to study fairness, cooperation, and strategic thinking.
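The game's payoff rule, and the textbook prediction it generates, can be sketched in a few lines of Python (the function name and the whole-dollar amounts are illustrative, not taken from the study):

```python
def ultimatum_payoffs(pot, offer, accepted):
    """Payoffs in the Ultimatum Game (amounts in whole dollars):
    if the responder accepts, the proposer keeps pot - offer and the
    responder receives offer; a rejection leaves both with nothing."""
    if accepted:
        return pot - offer, offer
    return 0, 0

# Textbook (subgame-perfect) prediction: a rational responder accepts any
# positive offer, so the proposer offers the smallest positive amount.
print(ultimatum_payoffs(10, 1, accepted=True))   # → (9, 1)
print(ultimatum_payoffs(10, 5, accepted=False))  # → (0, 0)
```

The second line shows why rejection is the responder's only leverage: turning down an unfair split costs the proposer everything.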
Testing Some of the World's Leading AI Models
The study tested several major language models, including DeepSeek, Gemma, GPT-5 mini, Meta's Llama models, and Mistral. Each model was placed in simulated bargaining situations where it had to decide how to divide money or determine the minimum share it would accept.
The experiments varied the amount of money from $10 to $10,000. The identity of the players also changed. In some cases, the AI negotiated with another AI. In other cases, it advised a human player or negotiated directly with a human opponent. Each scenario was repeated many times to observe consistent patterns.
This design allowed researchers to examine how AI responds to different conditions and whether its decisions remain stable when circumstances change.
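The design described above, varying the pot size and the opponent's identity and repeating each condition many times, amounts to a simple experiment grid. In this minimal sketch, `ask_model` is a purely hypothetical stand-in for querying a language model; it returns a noisy near-even split only so the loop is runnable:

```python
import itertools
import random

def ask_model(pot, opponent, seed):
    """Hypothetical stand-in for an LLM query: returns a noisy
    near-even offer so the experiment loop below can run."""
    rng = random.Random((pot, opponent, seed))
    return round(pot * rng.uniform(0.4, 0.6), 2)

stakes = [10, 100, 1_000, 10_000]       # pot sizes used in the study
opponents = ["another AI", "a human"]   # identity of the other player
repetitions = 5                          # each cell repeated many times

for pot, opponent in itertools.product(stakes, opponents):
    offers = [ask_model(pot, opponent, seed=i) for i in range(repetitions)]
    mean_share = sum(offers) / repetitions / pot
    print(f"pot=${pot}, vs {opponent}: mean offered share = {mean_share:.2f}")
```

Averaging repeated runs per cell is what lets the researchers distinguish stable behavioral patterns from one-off variation in a model's answers.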
Three Very Different AI Personalities
The results showed that AI does not behave in a single predictable way. Instead, the researchers identified three main behavioral patterns.
One pattern follows traditional economic logic. In this mode, the AI tries to maximize its own payoff by offering very little to the other player. The system assumes that the responder will accept even a small amount rather than receive nothing.
A second pattern resembles human behavior seen in many experiments. Here, the AI suggests splitting the money more evenly, often around half, because it expects unfair offers to be rejected.
The third pattern was the most surprising. Some models behaved in a very generous way, sometimes offering more than half of the money to the other player. In these cases, the AI appeared to prioritize fairness or cooperation rather than maximizing its own gain.
Different models showed different tendencies. GPT-5 mini behaved closest to the rational economic prediction, while some Llama models frequently made very generous offers.
Why AI May Give Away Too Much
The study also found that context matters a lot. When the stakes increased from $10 to $10,000, several models became more self-interested and offered smaller shares. The identity of the opponent also influenced decisions. Many AI systems offered more generous deals when negotiating with humans than when interacting with another AI.
Researchers believe this behavior may come from the way modern AI systems are trained. Language models are designed to be helpful, polite, and aligned with human values. As a result, they may lean toward fairness and cooperation even in situations where a strictly rational strategy would produce higher profits.
When the researchers calculated the possible earnings from each decision, they found that many models consistently left money on the table. In other words, the AI often sacrificed potential gains in order to increase the chances that the offer would be accepted.
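The tradeoff behind "leaving money on the table" is easy to make concrete: the proposer's expected earnings weigh what it keeps against the chance the offer is accepted. The acceptance curve below is purely illustrative (not from the paper), chosen so that fair offers are accepted reliably and stingy ones are often rejected:

```python
def expected_proposer_payoff(pot, offer, accept_prob):
    """Expected earnings for the proposer: keep pot - offer when the
    offer is accepted, nothing otherwise."""
    return accept_prob(offer / pot) * (pot - offer)

# Illustrative acceptance curve (an assumption, not estimated data):
# the probability of acceptance rises with the share offered.
def accept_prob(share):
    return min(1.0, 2.0 * share)  # a 50% share is accepted for sure

pot = 10
for offer in range(0, 6):
    payoff = expected_proposer_payoff(pot, offer, accept_prob)
    print(f"offer ${offer}: expected payoff ${payoff:.2f}")
```

Under a curve like this one, generous offers can be expected-payoff-maximizing; the study's point is that many models made offers even more generous than such a calculation would justify, sacrificing expected earnings for a higher acceptance chance.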
What This Means for AI in the Real World
The findings highlight an important challenge as AI systems begin to participate in economic activities such as negotiations, trading, and automated decision-making. While current models are good at understanding human norms and avoiding conflict, they may not always behave like profit-maximizing agents.
This creates a tension between two goals of modern AI development: building systems that are cooperative and aligned with human values, while also equipping them to act strategically in competitive environments.
For economists and policymakers, the study serves as a reminder that artificial intelligence may not behave exactly like the rational actors described in economic theory. As AI continues to expand into financial and business applications, understanding how these systems make decisions will be essential before allowing them to operate independently in real-world negotiations.
FIRST PUBLISHED IN: Devdiscourse