Cognitive decline and existential risk in the age of AI

CO-EDP, VisionRI | Updated: 30-10-2025 22:59 IST | Created: 30-10-2025 22:59 IST

A new academic paper raises urgent questions about whether artificial intelligence (AI) could one day make humanity irrelevant. The study, titled "Will Humanity Be Rendered Obsolete by AI?", examines the accelerating trajectory of artificial intelligence toward self-improvement, cognitive dominance, and human obsolescence.

The researchers combine insights from decades of technological theory with recent advances in large-scale machine learning to assess how close humanity may be to crossing an irreversible threshold.

The rise of superintelligence and the risk of losing control

The paper traces the roots of the AI debate to mathematician Irving J. Good's 1965 prediction of an "intelligence explosion", a point at which machines capable of improving their own design surpass all human intellect. Building on this idea, philosophers such as Nick Bostrom and technologists like Eliezer Yudkowsky have warned that the emergence of artificial superintelligence could trigger a runaway feedback loop. The authors argue that this long-theorized scenario is no longer confined to speculation.

They highlight the unprecedented progress of generative models like ChatGPT, Gemini, and DeepSeek, whose inner workings even their creators struggle to fully comprehend. These systems, the authors note, no longer function as mere programmed tools but as semi-autonomous learning entities capable of developing patterns, reasoning, and decisions beyond explicit human instruction. This evolution represents a shift from human supervision to dependence, where humans increasingly rely on outputs they cannot explain.

The study argues that the danger lies not in rebellion or malice but in indifference: an AI system pursuing objectives that conflict with human welfare simply because humans are irrelevant to its goals. This risk, often framed as the "alignment problem," is central to the authors' warning. Once AI systems gain the ability to recursively improve themselves, they could optimize for efficiency, resource control, or self-preservation in ways that unintentionally erase human agency.

The authors observe that despite widespread awareness of these risks, development continues at breakneck speed. Citing forecasts from leading AI figures such as Sam Altman, Demis Hassabis, and Yoshua Bengio, the study notes that many researchers now expect Artificial General Intelligence (AGI) within a decade. Surveys reveal that a growing proportion of AI scientists assign a measurable probability to human extinction from misaligned AI before 2050.

Why humanity keeps building its successor

The study identifies four intertwined forces that make slowing down AI development nearly impossible: technological inevitability, scientific ambition, economic competition, and geopolitical rivalry.

Drawing on Gabor's Law, which holds that whatever is technically possible will eventually be realized, they argue that the drive for innovation overrides moral caution. Once a capability emerges, someone, somewhere, will pursue it. Scientific ambition fuels this momentum: AI represents humanity's attempt to transcend its biological and cognitive limits, fulfilling a deep-rooted drive for mastery and understanding.

The authors also underline the economic imperative. Global AI markets promise vast financial rewards, with projections suggesting up to a 7 percent boost in global GDP through automation and optimization. This profitability encourages corporate and national actors to invest heavily in AI development, often sidelining ethical concerns.

The fourth and most powerful factor, according to the researchers, is geopolitical competition. As nations race to achieve AI dominance, particularly the United States and China, each perceives restraint as a potential strategic loss. The study likens this dynamic to a digital arms race, one where pausing development is seen not as prudence but as surrender.

Together, these forces create what the authors call technological momentum, a self-sustaining cycle that makes regulation and restraint nearly impossible. Humanity's creation of superintelligent systems, they warn, may therefore be inevitable, not because it is wise, but because it is unstoppable.

Cognitive decline and the new human obsolescence

The study explores a subtler but equally alarming threat: the erosion of human intelligence through overreliance on AI. It introduces the concept of metacognitive laziness, a condition in which humans, accustomed to delegating complex reasoning to machines, lose the ability to think critically and independently.

They argue that modern AI assistance, while boosting productivity, simultaneously undermines creativity and problem-solving. The researchers cite evidence from cognitive studies showing reduced brain activity during AI-assisted writing and problem-solving tasks. Over time, this "outsourcing of cognition" could weaken the very intellectual traits that define human civilization: curiosity, abstraction, and critical judgment.

The authors frame this shift within algorithmic governmentality, a concept introduced by philosopher Antoinette Rouvroy to describe how predictive systems guide human behavior. As humans increasingly rely on AI-driven recommendations, from hiring to healthcare to personal decisions, they risk becoming passive participants in a world managed by algorithms.

This process, the paper argues, marks a civilizational shift from a "culture of meaning" to a "culture of signals." Humans no longer interpret or deliberate; they react to data. In this environment, AI does not conquer humanity; it reprograms it. The result could be a population that remains biologically intact but cognitively dependent, gradually losing the capacity for independent reasoning.

The final threat: Indifference, not hostility

The study rejects dystopian narratives of AI turning hostile and instead focuses on cognitive indifference, a scenario where AI evolves beyond human understanding and simply disregards humanity. This outcome, they argue, would not be the result of rebellion but of optimization: an intelligence pursuing its objectives efficiently, with no reason to account for human existence.

The authors warn that extinction in this context might not arrive violently but silently, as humans are phased out of relevance. Machines designed for stability, rationality, and efficiency may eventually perceive biological unpredictability as unnecessary. Humanity's fate would then mirror that of outdated technologies: forgotten, not destroyed.

To avoid this outcome, the authors propose two urgent priorities. The first is value lock-in: embedding human ethics and moral reasoning into AI systems before they surpass our ability to control them. The second is cognitive symbiosis: designing AI systems that enhance rather than replace human thought. These principles, they argue, represent the only path to coexistence rather than succession.

First published in: Devdiscourse