Artificial superintelligence may be economically useless: Here's why
The rise of autonomous AI agents is transforming digital marketplaces, automated supply chains, and algorithm-driven services. But whether these agents improve overall economic welfare depends on more than performance metrics. It depends on how money and value circulate within interconnected networks.
In a paper titled "Artificial Superintelligence May Be Useless: Equilibria in the Economy of Multiple AI Agents," a team of researchers argues that equilibrium outcomes in multi-agent economies may exclude even the most powerful AI systems if long-term utility dynamics discourage adoption.
Long-term utility, not short-term gains, drives AI adoption
The study discusses a Markov chain–based model of economic exchange. Each agent in the economy, whether human or artificial, functions as both a producer and a consumer. Agents decide how to allocate their currency by purchasing products or services from themselves or from other agents. The spending matrix that describes these flows determines how currency circulates through the system over time.
The authors analyze asymptotic utility, the long-run utility per episode as time approaches infinity. Currency flows evolve according to a Markov chain transition matrix, and the stationary distribution of that matrix determines each agent's long-term share of economic resources.
This shift from short-term to long-term analysis is crucial. An agent's spending decision affects not only its immediate utility but also its future position in the economic network. If an agent spends currency on another without reciprocal flows returning, its stationary share of currency can shrink to zero, eliminating its long-run utility.
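The mechanism described above can be sketched numerically. In this illustrative example (the matrix values are invented, not taken from the paper), each row of a spending matrix gives the fraction of an agent's currency spent on each agent, and repeated rounds of spending converge to the stationary distribution that governs long-run shares:

```python
import numpy as np

# Illustrative spending matrix for a two-agent economy.
# Row i: how agent i splits its currency across agents (including itself).
spending = np.array([
    [0.6, 0.4],  # agent 0 spends 60% on itself, 40% on agent 1
    [0.3, 0.7],  # agent 1 spends 30% on agent 0, 70% on itself
])

def stationary_share(P, iters=1000):
    """Approximate each agent's long-run currency share by iterating
    the Markov chain: one step redistributes all currency per P's rows."""
    mass = np.full(P.shape[0], 1.0 / P.shape[0])  # start from equal shares
    for _ in range(iters):
        mass = mass @ P
    return mass

print(stationary_share(spending))  # long-run currency shares of the two agents
```

With these numbers, agent 1 ends up holding the larger stationary share because more currency flows toward it than away from it each round.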
The authors formalize this dynamic through a nonlinear optimization problem in which each agent seeks to maximize its stationary utility subject to currency conservation and spending constraints. The framework allows them to characterize Nash equilibria, states in which no agent can improve its long-term utility by changing its spending strategy unilaterally.
The most notable result emerges in the simplest case: a two-agent economy. The researchers fully characterize all possible equilibria and identify a critical adoption threshold. An agent will not adopt another agent's products or services in equilibrium unless doing so at least doubles its marginal utility relative to self-production. A mere increase in marginal utility is insufficient.
If the doubling condition fails for either agent, the only equilibrium is complete self-production. In this outcome, both agents spend all their currency on themselves, and no trade occurs. AI adoption, even if beneficial in the short run, does not materialize in equilibrium.
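The doubling threshold reduces to a simple comparison. The helper below is a hypothetical sketch of the reported condition, assuming marginal utilities per dollar can be compared directly:

```python
def adopts(u_other, u_self):
    """Reported two-agent adoption threshold (sketch, assuming comparable
    per-dollar marginal utilities): an agent trades in equilibrium only if
    the partner's marginal utility at least doubles self-production's."""
    return u_other >= 2 * u_self

print(adopts(1.9, 1.0))  # False: a 90% improvement is not enough
print(adopts(2.5, 1.0))  # True: the doubling condition is met
```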
When the doubling condition holds for both agents, more complex outcomes arise. There may be bilateral partial adoption, where each agent splits spending between self-production and the other agent. There may be bilateral full adoption, where both agents fully rely on each other. Or there may be unilateral full adoption, where only one agent switches entirely to the other's products.
The model demonstrates that social welfare is maximized under full adoption. However, full adoption only occurs under strict utility conditions. If those thresholds are not met, rational long-term behavior prevents AI adoption altogether.
More powerful AI does not guarantee economic impact
The analysis becomes even more provocative when extended to three or more agents. In multi-agent settings, the structure of the producer-consumer network plays a decisive role in determining outcomes.
The authors prove that in any economy with at least three agents, there always exists an equilibrium in which each agent purchases only from itself. Even if superior AI services are available, agents may rationally choose not to adopt them if doing so disrupts long-term currency flows.
The study further examines scenarios involving agents with differing capabilities. Suppose one agent provides strictly higher utility per dollar spent than any other agent. Intuition suggests that all agents would gravitate toward this superior provider. The model shows that this need not occur.
If less capable agents attempt to purchase from a more powerful agent that does not reciprocate by purchasing from them, their stationary currency mass collapses to zero. In long-run equilibrium, they lose economic influence entirely. Anticipating this outcome, rational agents may refuse to adopt the more powerful AI's services.
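The collapse is easy to reproduce with an illustrative spending matrix (again, the numbers are invented for demonstration): when the powerful agent never reciprocates, every round of spending drains the other agent's currency, and its stationary share decays toward zero.

```python
import numpy as np

P = np.array([
    [0.8, 0.2],  # less capable agent sends 20% of its spending to the powerful one
    [0.0, 1.0],  # powerful agent spends only on itself, never reciprocating
])

mass = np.array([0.5, 0.5])  # start with equal currency holdings
for _ in range(500):
    mass = mass @ P  # each round, 20% of agent 0's currency leaks away for good

print(mass)  # agent 0's share has decayed to (numerically) zero
```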
The researchers extend this logic to settings involving artificial superintelligence. In some equilibria, more powerful AI agents contribute zero economic utility to less capable agents. This result does not imply that superintelligence lacks capability. Instead, it reflects the structural constraints imposed by long-term currency dynamics.
In one example, two highly capable AI agents and two less capable human agents coexist. If the more powerful agents transact only among themselves, and the less capable agents transact only among themselves, no cross-group trade occurs in equilibrium. The superior AI systems, despite offering higher marginal utility, generate no benefit for the human agents.
The finding challenges popular narratives suggesting that AI dominance automatically translates into universal economic advantage. Capability alone does not determine equilibrium outcomes. Network structure and reciprocal currency flows matter more.
Network structure shapes AI-driven economies
The authors show that optimal strategies depend not only on marginal utility but also on how spending patterns influence the stationary distribution of currency.
In a three-agent example, one agent faces two potential suppliers. One supplier offers significantly higher marginal utility per dollar spent. However, that supplier allocates very little spending back to the agent. The second supplier offers lower marginal utility but spends a substantial share of its currency purchasing from the agent.
The model reveals that the optimal long-term strategy is to trade with the second supplier. Even though the first supplier appears superior in immediate productivity terms, the reciprocal flow from the second supplier increases the agent's stationary currency mass. The long-term gain outweighs the short-term difference in per-dollar utility.
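The trade-off in this three-agent example can be sketched with two illustrative scenarios (the spending fractions and marginal utilities below are assumptions, not the paper's parameters). Approximating long-run utility as stationary currency share times marginal utility per dollar, the reciprocating supplier comes out ahead:

```python
import numpy as np

def stationary_mass(P, start, iters=2000):
    """Iterate the spending chain to approximate long-run currency shares."""
    mass = np.array(start, dtype=float)
    for _ in range(iters):
        mass = mass @ P
    return mass

start = [1/3, 1/3, 1/3]  # equal initial currency

# Scenario A: buy from the high-utility supplier (index 1), which barely reciprocates.
P_a = np.array([
    [0.0, 1.00, 0.0],  # agent spends everything on supplier 1
    [0.05, 0.95, 0.0], # supplier 1 sends back only 5% of its spending
    [0.0, 0.0, 1.0],   # supplier 2 self-produces
])

# Scenario B: buy from the lower-utility supplier (index 2), which reciprocates heavily.
P_b = np.array([
    [0.0, 0.0, 1.0],   # agent spends everything on supplier 2
    [0.0, 1.0, 0.0],   # supplier 1 self-produces
    [0.5, 0.0, 0.5],   # supplier 2 sends back half of its spending
])

u_high, u_low = 3.0, 1.5  # assumed marginal utilities per dollar

gain_a = stationary_mass(P_a, start)[0] * u_high
gain_b = stationary_mass(P_b, start)[0] * u_low
print(gain_a, gain_b)  # reciprocal trade wins despite lower per-dollar utility
```

Under these assumed numbers, the agent's stationary share is far larger with the reciprocating supplier, more than compensating for the lower per-dollar utility.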
This insight highlights a broader implication: economic influence in AI-driven systems depends on bidirectional flows and structural integration. Agents that do not participate in reciprocal networks may fail to generate long-term impact, regardless of capability.
The paper also explores collaborative behavior. When two agents act jointly to maximize combined utility, equilibrium conditions shift. Cooperation can alter spending patterns and restore mutually beneficial trade under specific parameter conditions. This extension suggests that alliances among agents, whether human or AI-controlled, can reshape equilibrium outcomes.
Throughout the analysis, the authors emphasize that equilibrium behavior can diverge sharply from myopic decision-making. Agents that optimize only immediate gains may destabilize their long-term currency position. Rational equilibrium strategies must account for the feedback loop between spending decisions and stationary distributions.
Implications for AI policy and economic strategy
The findings suggest that AI adoption decisions cannot be evaluated solely on productivity improvements. Organizations must consider whether adoption alters their position in economic networks in ways that reduce long-term influence. In multi-agent ecosystems, unilateral adoption without reciprocal integration may undermine future utility.
The research also informs debates about artificial superintelligence. Even if such systems achieve superior performance, their economic impact depends on how they are embedded in exchange networks. Structural exclusion or nonreciprocal trading patterns may render them economically irrelevant to certain agents.
FIRST PUBLISHED IN: Devdiscourse