The rise of ‘Algority’: How people are letting AI decide for them


CO-EDP, VisionRI | Updated: 06-02-2026 11:50 IST | Created: 06-02-2026 11:50 IST

A new peer-reviewed study raises fresh concerns that users may be granting artificial intelligence a form of epistemic authority, subtly reshaping how responsibility, trust, and judgment are distributed between humans and machines.

The findings are presented in the study "Perceiving AI as an Epistemic Authority or Algority: A User Study on the Human Attribution of Authority to AI," published in the journal Machine Learning and Knowledge Extraction.

Based on responses from 610 participants, the research examines when individuals are willing to defer to AI systems across different decision-making contexts and why such deference occurs. The study introduces the concept of "algority" to describe a person's tendency to treat algorithmic recommendations as superior or unquestionable, even in areas traditionally governed by human expertise and moral reasoning.

Trust in automation drives acceptance of AI authority

The researchers focus on how trust in automation, belief in AI performance, and attitudes toward authority influence whether people accept AI recommendations, reject them, or prefer hybrid human–AI decision models.

The results show that trust in automated systems is the strongest predictor of algorithmic deference. Participants who expressed higher levels of trust in automation were significantly more likely to endorse AI involvement across all tested domains, including credit scoring, job matching, medical triage, and criminal sentencing. In many cases, high-trust participants favored AI-supported or AI-led decisions even when scenarios involved ethical complexity or potential harm to individuals.

Belief in the technical superiority of AI also played a role, though its influence varied by context. Participants who viewed AI as objective, precise, and less biased than humans were more inclined to accept its recommendations, particularly in data-intensive tasks such as financial risk assessment or employment screening. However, belief in AI performance alone was not sufficient to explain algorithmic authority. Without trust in automation as a broader system, confidence in AI's technical capabilities did not consistently translate into acceptance of its decisions.
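
To illustrate the kind of predictor comparison described above, the sketch below fits a logistic regression on synthetic, survey-style data in which endorsement of AI involvement is generated with a heavier weight on trust in automation than on belief in AI performance. The variable names, scales, and data are assumptions for illustration only and do not reproduce the authors' analysis.

```python
# Illustrative sketch only: synthetic data, not the study's dataset or method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 610  # matches the study's participant count, but the data here is simulated

# Simulated 1-7 Likert-style scores (purely synthetic)
trust_in_automation = rng.integers(1, 8, n)
belief_in_ai_performance = rng.integers(1, 8, n)

# Synthetic outcome: whether a participant endorses AI involvement,
# generated so that trust carries the larger weight.
logit = -4.0 + 0.9 * trust_in_automation + 0.3 * belief_in_ai_performance
endorses_ai = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([trust_in_automation, belief_in_ai_performance])
model = LogisticRegression().fit(X, endorses_ai)

# With this synthetic data the trust coefficient dominates, mirroring the
# reported pattern that trust in automation is the strongest predictor.
print("Coefficients (trust, belief):", model.coef_[0])
```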

The study further finds that acceptance of AI authority is not uniform across domains. Participants were more comfortable granting AI influence in administrative or predictive tasks than in decisions involving moral judgment or punishment. Even among those who trusted automation, support for fully autonomous AI decision-making declined in scenarios involving criminal justice or life-altering consequences. This pattern suggests that while algorithmic authority is expanding, it remains bounded by contextual and ethical considerations.

Moral attitudes shape resistance to full AI decision-making

The researchers distinguish between trust in institutions, respect for authority, and openness to delegation of judgment, finding that these factors interact in complex ways when AI enters decision-making processes.

Participants with strong deference to traditional authority structures were not automatically more willing to accept AI authority. In morally sensitive contexts, such as criminal sentencing, some participants resisted AI involvement precisely because they viewed moral judgment as a human responsibility that should not be delegated to machines. This resistance was strongest among participants who emphasized accountability, empathy, and contextual reasoning as essential components of legitimate authority.

The study also highlights the appeal of hybrid decision models, where AI provides recommendations while humans retain final authority. Across nearly all scenarios, hybrid models drew the highest levels of support, ahead of both fully human and fully automated decision-making. This preference reflects a desire to balance efficiency and consistency with human oversight and ethical responsibility.

Importantly, the authors caution that hybrid systems do not automatically prevent algorithmic authority from dominating outcomes. If users treat AI recommendations as inherently superior or objective, human oversight risks becoming symbolic rather than substantive. In such cases, responsibility may still shift away from humans, even when decision structures formally preserve human involvement.
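
As a rough, hypothetical illustration of what substantive rather than symbolic oversight could look like, the sketch below keeps the AI output advisory, requires a human justification for every decision, and records whether the reviewer departed from the recommendation. It is not a design taken from the study; all names are invented for illustration.

```python
# Hypothetical sketch of a hybrid human-AI decision record.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    ai_recommendation: str      # advisory output from the model
    ai_rationale: str           # explanation shown to the reviewer
    human_decision: str         # the decision that actually takes effect
    human_justification: str    # required even when agreeing with the AI
    decided_at: datetime

    @property
    def overrode_ai(self) -> bool:
        """True when the reviewer departed from the AI recommendation."""
        return self.human_decision != self.ai_recommendation


def record_decision(case_id: str, ai_recommendation: str, ai_rationale: str,
                    human_decision: str, human_justification: str) -> DecisionRecord:
    # Refusing an empty justification nudges reviewers toward substantive,
    # rather than symbolic, oversight.
    if not human_justification.strip():
        raise ValueError("A human justification is required for every decision.")
    return DecisionRecord(case_id, ai_recommendation, ai_rationale,
                          human_decision, human_justification,
                          decided_at=datetime.now(timezone.utc))
```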

The concept of "algority" is introduced to capture this dynamic. Algority describes not the technical power of AI systems, but the human tendency to grant them epistemic dominance. The study argues that this tendency can emerge gradually, reinforced by repeated exposure to AI outputs that appear accurate, confident, and data-driven. Over time, users may internalize the assumption that algorithms know best, reducing critical engagement and weakening accountability mechanisms.

Implications for governance, accountability, and AI design

One key risk identified by the study is responsibility diffusion. When AI systems are treated as authoritative, decision-makers may attribute outcomes to the algorithm rather than to human judgment, even when humans remain formally responsible. This dynamic complicates accountability, particularly in sectors such as healthcare, finance, and criminal justice, where decisions can have lasting consequences for individuals and communities.

The study calls for explainability and transparency in AI system design. When users understand how recommendations are generated and where limitations lie, they are more likely to engage critically rather than defer automatically. Conversely, opaque systems that present outputs without context may encourage uncritical acceptance, especially among users predisposed to trust automation.
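
As a hedged example of this design principle, the hypothetical sketch below packages a recommendation together with its main contributing factors and known limitations, so that the limits are as visible as the verdict. The field names and example values are invented for illustration and are not drawn from the study.

```python
# Hypothetical sketch: presenting an AI recommendation with its basis and limits,
# rather than as a bare verdict.
from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    recommendation: str
    confidence: float                                            # model's own uncertainty estimate
    top_factors: list[str] = field(default_factory=list)         # main inputs behind the output
    known_limitations: list[str] = field(default_factory=list)   # where the model is unreliable

    def render(self) -> str:
        """Format the output so the limitations are as visible as the verdict."""
        lines = [f"Recommendation: {self.recommendation} (confidence {self.confidence:.0%})",
                 "Based mainly on: " + ", ".join(self.top_factors),
                 "Not reliable when: " + "; ".join(self.known_limitations)]
        return "\n".join(lines)

rec = ExplainedRecommendation(
    recommendation="flag application for manual review",
    confidence=0.72,
    top_factors=["short credit history", "high debt-to-income ratio"],
    known_limitations=["thin-file applicants", "recently self-employed borrowers"],
)
print(rec.render())
```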

The authors also highlight the need for education and organizational safeguards that reinforce the advisory role of AI. Training programs that emphasize critical evaluation of algorithmic outputs, rather than passive acceptance, can help counteract the growth of algorithmic authority. Institutional policies should clarify decision responsibility and ensure that human oversight remains meaningful rather than procedural.

From a regulatory perspective, the findings support calls for governance frameworks that address not only technical risks but also behavioral and cognitive impacts of AI adoption. While much regulatory attention has focused on bias, accuracy, and data protection, the study suggests that epistemic authority deserves equal consideration. How people interpret and rely on AI systems may determine whether safeguards succeed or fail in practice.

Although the study is based on self-reported attitudes rather than observed behavior, its large sample size and cross-domain design provide a robust foundation for future research. The authors acknowledge that real-world decision environments may amplify or constrain algorithmic authority in ways not fully captured by survey scenarios. They call for longitudinal studies and experimental designs that track how exposure to AI systems shapes decision-making over time.

First published in: Devdiscourse
