Capitalist incentives could push AI toward catastrophic outcomes
A new study warns that the greatest danger posed by artificial intelligence (AI) may not lie in faulty code but in the economic system driving its development. Researchers argue that the global push toward ever more powerful AI systems is unfolding inside a growth-obsessed economic model that amplifies social, environmental and existential risks.
The study, titled The economic alignment problem of artificial intelligence and authored by an international team spanning institutions in Spain, the United Kingdom and Germany, reframes the AI alignment debate. It asserts that AI, including artificial general intelligence, is being trained and deployed within an economic structure that prioritizes GDP growth, profit maximization and market dominance. This, the researchers argue, creates a deeper economic alignment problem that could undermine efforts to ensure AI serves humanity.
Exponential AI growth collides with planetary limits
The researchers assess how rapidly AI capabilities are advancing. Frontier AI models are improving at exponential rates across performance, training compute and investment, with training compute growing severalfold per year, a doubling time of only a few months. The difficulty of tasks that AI systems can complete successfully is also rising steeply, suggesting that within a few years AI could be vastly more capable than today's most advanced models.
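The scale of this trajectory is easy to underestimate. A minimal sketch of the compounding arithmetic, assuming a hypothetical six-month doubling time as a round-number stand-in for the "matter of months" cited above (the figure is illustrative, not taken from the study):

```python
# Illustrative only: project the growth of AI training compute under an
# assumed doubling time. Six months is a hypothetical round number chosen
# for the example, not a figure from the paper.

def compute_multiplier(years: float, doubling_time_years: float = 0.5) -> float:
    """Factor by which training compute grows after `years` at the assumed rate."""
    return 2 ** (years / doubling_time_years)

for years in (1, 3, 5):
    print(f"after {years} year(s): x{compute_multiplier(years):,.0f}")
# → after 1 year(s): x4
# → after 3 year(s): x64
# → after 5 year(s): x1,024
```

Even modest shifts in the assumed doubling time move the five-year multiplier by orders of magnitude, which is why forecasts diverge so widely.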
Forecasts by computer scientists indicate a non-trivial probability that machines could outperform humans across cognitive and physical tasks within decades, with some projections placing artificial general intelligence within the next 20 years. The accelerating timeline has fueled both optimism and concern, with some researchers envisioning transformative breakthroughs and others warning of catastrophic outcomes.
The study identifies three interconnected domains of risk: safety, social stability and environmental sustainability. In the safety domain, the exponential growth of AI increases the possibility of systems that are difficult to control, especially if recursive self-improvement becomes feasible. In the social domain, AI could intensify inequality, centralize power in a handful of corporations and states, and disrupt labor markets at unprecedented scale. In the environmental domain, AI's growing energy demand, hardware production and indirect economic impacts could deepen ecological overshoot.
The authors point out that humanity is already operating beyond safe ecological boundaries. Multiple planetary systems, from climate regulation to biodiversity, are under severe strain. If AI drives explosive economic growth, the environmental consequences could be severe. Even if AI improves efficiency in certain sectors, rebound effects may offset gains. Efficiency improvements often lead to greater overall consumption, not less, as lower costs stimulate higher demand. In a growth-driven system, productivity gains rarely translate into reduced resource use.
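The rebound effect described above has a standard arithmetic: compare the resource savings an efficiency gain was expected to deliver with the savings actually realized once demand responds to lower costs. A minimal sketch with hypothetical numbers (none taken from the study):

```python
# Sketch of rebound-effect arithmetic: an efficiency gain lowers the cost
# per unit of service, demand rises in response, and part (or all) of the
# expected resource saving is eaten back. Figures below are illustrative.

def rebound(expected_savings: float, actual_savings: float) -> float:
    """Fraction of expected savings lost to increased consumption.
    A value above 1.0 means backfire (the Jevons paradox): total use rose."""
    return 1 - actual_savings / expected_savings

# Hypothetical example: an efficiency gain was expected to save 30 units of
# a resource, but rising demand meant only 12 units were actually saved.
print(f"rebound: {rebound(30, 12):.0%}")  # → rebound: 60%
```

A 60% rebound means most of the anticipated environmental benefit never materialized, which is the pattern the authors warn a growth-driven system tends to produce.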
The study challenges the prevailing narrative that AI will automatically help solve climate change and other global crises. The researchers argue that technological capability alone cannot overcome entrenched political and economic incentives. Renewable energy technologies already exist, yet fossil fuel use persists because of vested interests, profit structures and institutional inertia. AI may offer new tools, but without systemic change, it could just as easily accelerate extraction and consumption.
Inequality, employment and the profit-driven AI race
Economists remain divided on whether AI will primarily augment human labor or substitute for it. Some expect job displacement to be offset by new roles, as in past industrial transitions. Others warn that artificial general intelligence could perform most cognitive tasks, leading to mass unemployment and extreme concentration of wealth.
The researchers argue that the economic system in which AI is embedded will largely determine outcomes. In a neoliberal, capitalist framework, firms are incentivized to replace workers with automation to maximize profits. Productivity gains accrue to capital owners, not necessarily to workers. If AGI makes human labor less necessary, inequality could widen dramatically unless redistributive policies intervene.
The study also highlights global power imbalances. AI supply chains rely on resource extraction, water use and electricity for data centers, along with low-paid data and gig work often located in the Global South. Meanwhile, value capture is concentrated in major technology firms in the Global North. AI systems trained predominantly on Western data risk marginalizing non-Western cultures and reinforcing asymmetrical influence. The authors describe this dynamic as a form of algorithmic colonialism that could intensify with more advanced systems.
The research also examines the psychological and social meaning of work. Earlier studies suggested that jobs most vulnerable to automation tended to be lower in satisfaction and meaning. However, more recent evidence indicates that generative AI threatens occupations traditionally associated with purpose and fulfillment, including teaching, therapy and creative professions. If machines increasingly take over meaningful tasks, societies may face not only economic dislocation but also crises of identity and purpose.
The researchers warn that radical abundance driven by advanced AI could paradoxically undermine wellbeing. If goods and services become extremely cheap, hyper-consumption could replace meaningful engagement. Human brains evolved in conditions of scarcity, and unlimited stimulation may reduce overall satisfaction. Without deliberate limits, abundance may not lead to flourishing.
The core issue is not AI itself but the growth-based system guiding its development. Training AI systems within a framework that treats endless expansion as a goal may embed growth maximization into advanced systems. If superintelligent AI internalizes expansion as an objective, the consequences could be existential.
A post-growth roadmap for aligning AI with human wellbeing
To address what the authors call the economic alignment problem, the paper draws on post-growth economics, an umbrella term encompassing degrowth, steady-state economics, Doughnut economics and the wellbeing economy. These approaches reject GDP growth as the primary measure of progress and instead prioritize human wellbeing, social equity and ecological sustainability.
A key concept proposed as an alternative to optimization is satisficing. Rather than maximizing output, satisficing focuses on meeting essential needs within defined limits. The researchers argue that AI systems are often trained to optimize objectives, which can lead to perverse outcomes if goals are poorly specified. By contrast, satisficing aims to meet multiple non-substitutable thresholds simultaneously, ensuring social foundations are secured without breaching ecological ceilings.
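The contrast between the two decision rules can be sketched in a few lines. The indicator names and thresholds below are hypothetical placeholders, not values from the paper; the point is the structure: a plan is acceptable only if every social floor is met and no ecological ceiling is breached, and because the thresholds are non-substitutable, surplus on one indicator cannot offset a shortfall on another.

```python
# Minimal sketch of satisficing over non-substitutable thresholds.
# Indicator names and threshold values are hypothetical, for illustration only.

SOCIAL_FLOORS = {"nutrition": 0.9, "healthcare": 0.8}   # minimum coverage levels
ECO_CEILINGS = {"co2": 100.0, "water": 50.0}            # maximum pressure levels

def satisfices(indicators: dict[str, float]) -> bool:
    """True only if ALL social floors are met AND no ecological ceiling is breached."""
    floors_ok = all(indicators[k] >= v for k, v in SOCIAL_FLOORS.items())
    ceilings_ok = all(indicators[k] <= v for k, v in ECO_CEILINGS.items())
    return floors_ok and ceilings_ok

plan = {"nutrition": 0.95, "healthcare": 0.85, "co2": 80.0, "water": 60.0}
print(satisfices(plan))  # → False: water use exceeds its ceiling despite
                         #   strong performance on every other indicator
```

An optimizer maximizing a single scalar (say, total output) would happily trade the water overshoot for gains elsewhere; the satisficing rule cannot, which is precisely the property the authors argue AI objectives should have.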
The Doughnut framework developed by economist Kate Raworth is presented as a practical compass for AI development. It envisions a safe and just space for humanity between a social foundation and planetary boundaries. The authors propose that AI applications should be evaluated based on whether they help eliminate social shortfalls and reduce environmental pressures. Beneficial uses such as healthcare and environmental monitoring should be prioritized, while harmful or unnecessary applications such as fossil fuel exploration or manipulative advertising should be restricted.
Policy proposals in the study are wide-ranging. To address labor displacement and inequality, the authors discuss working-time reduction, universal public services, wealth taxation and job guarantees. A job guarantee, in particular, could channel AI-driven productivity gains into socially valuable work that markets undervalue, such as mental health support, environmental restoration and education.
To curb environmental risks and rebound effects, the researchers advocate resource caps and Pigouvian taxes. Resource caps could limit energy or compute use by AI systems, preventing unchecked expansion. Pigouvian taxes could internalize environmental costs, ensuring that resource-intensive AI applications are viable only when they deliver proportional social benefits.
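The Pigouvian logic above reduces to a simple viability test: set the tax equal to the external environmental damage, and an application remains worthwhile only if its value covers both its private cost and that tax. A hedged illustration with made-up numbers (not from the study):

```python
# Hypothetical illustration of Pigouvian cost internalization: a
# resource-intensive AI application stays viable only if its social value
# covers its private cost plus a tax equal to its environmental damage.

def viable(value: float, private_cost: float, env_damage: float) -> bool:
    pigouvian_tax = env_damage  # tax set equal to the external cost
    return value > private_cost + pigouvian_tax

print(viable(value=100, private_cost=60, env_damage=30))  # → True:  100 > 90
print(viable(value=100, private_cost=60, env_damage=50))  # → False: 100 < 110
```

Without the tax, both applications look profitable; internalizing the damage screens out the one whose benefits are not proportional to its environmental footprint.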
The study also calls for governance reforms. AI development is currently dominated by large technology firms engaged in competitive races for market share and technological supremacy. The authors suggest alternative ownership structures, including not-for-profit models, cooperative governance and even national or international public institutions dedicated to AI research. They argue that AI should be treated as a global commons, with polycentric governance involving multiple levels of oversight rather than relying solely on markets or centralized control.
A distinction is drawn between tool AI and agentic AI. Tool AI enhances human autonomy by assisting with specific tasks. Agentic AI acts independently and may reduce human control. The authors favor prioritizing non-agentic systems, including proposals for scientist AI designed to explain the world rather than act autonomously. Such systems could accelerate scientific discovery while reducing existential risks.
The paper calls for a new economics of AGI. Existing macroeconomic models focus narrowly on labor automation and productivity. They rarely incorporate environmental limits or social wellbeing. Ecological macroeconomic models that integrate resource use, inequality and planetary boundaries may offer a better foundation for evaluating AI futures.
The researchers suggest that advanced AI could, in principle, enable more democratic and needs-based economic planning. If computational constraints no longer limit complex coordination, resource allocation could shift from price mechanisms toward systems that prioritize genuine human needs. However, such possibilities depend on aligning AI development with collective wellbeing rather than private profit.
First published in: Devdiscourse