AI may be efficient, but public still prefers humans in scarce resource decisions


CO-EDP, VisionRI | Updated: 16-02-2026 09:49 IST | Created: 16-02-2026 09:49 IST
Representative Image. Credit: ChatGPT

Governments and institutions across the world are turning to artificial intelligence (AI) to distribute scarce resources, but public support may not be keeping pace. A new study finds that people consistently view algorithm-based allocation as morally less desirable than almost any alternative. Whether the resource is a kidney transplant or a kindergarten place, individuals show a marked preference for human or traditional mechanisms over AI systems.

The study, Resource Allocation by Algorithms: People Prefer Almost Any Alternative, published in the journal AI & Society, draws on a large experimental survey of more than 1,400 participants. It examines how people morally evaluate AI-driven distribution of scarce goods compared with allocation by friends, waiting lists, lotteries, or markets.

The results reveal a robust pattern of algorithm aversion that cuts across domains. While policymakers often frame AI allocation systems as objective and efficient, the public appears skeptical of granting machines authority over who receives critical or valuable resources.

Public prefers humans, queues, and markets over algorithms

The authors designed a vignette-based experiment in which participants were randomly assigned to scenarios involving five different goods: a kidney transplant, emergency shelter following a hurricane, a kindergarten place, legal representation, and theater tickets. Each scenario described one of five allocation mechanisms: an AI-based algorithm, a friend making the decision, a waiting list, a lottery, or a market-based process.

Participants were asked to evaluate how morally desirable it was that the resource be distributed using the specified mechanism. Across all goods, algorithmic allocation ranked near the bottom.
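As described, the design crosses five goods with five allocation mechanisms. A minimal sketch of such a between-subjects vignette setup is shown below; it is illustrative only, assumes the goods and mechanisms are fully crossed into 25 cells, and is not the study's actual survey material.

```python
# Illustrative sketch, not the study's materials: enumerate a hypothetical
# 5-goods x 5-mechanisms vignette grid and randomly assign a participant to
# one cell, as a between-subjects design would.
import itertools
import random

goods = [
    "kidney transplant",
    "emergency shelter after a hurricane",
    "kindergarten place",
    "legal representation",
    "theater tickets",
]
mechanisms = ["AI-based algorithm", "friend", "waiting list", "lottery", "market"]

conditions = list(itertools.product(goods, mechanisms))
assert len(conditions) == 25  # full crossing assumed for illustration

random.seed(42)  # reproducible assignment for this sketch
participant_condition = random.choice(conditions)
print(participant_condition)
```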

Allocation by a friend received the highest moral approval overall, followed closely by waiting lists. Market-based distribution also tended to receive higher moral ratings than algorithmic allocation. Lotteries were rated roughly on par with AI systems but did not outperform human-driven approaches.

The pattern was consistent across domains. Even in high-stakes scenarios such as organ transplantation or disaster relief, respondents did not rate algorithmic distribution as morally superior to human or conventional mechanisms. The finding challenges assumptions that AI's perceived impartiality would automatically translate into moral endorsement.

The authors conclude that when confronted with decisions about scarce resources, people do not instinctively treat algorithms as neutral arbiters. Instead, they appear to associate moral legitimacy more strongly with familiar human or procedural systems.

Transparency and comprehensibility shape moral judgments

The study investigates why algorithm aversion occurs. One of the most important explanatory factors identified is perceived opacity. Participants rated algorithmic allocation as significantly less understandable than other mechanisms. Lower perceived comprehensibility strongly correlated with lower moral approval. When statistical models accounted for perceived opacity, the negative moral evaluation of AI systems was substantially reduced.
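One way to picture that statistical adjustment is a regression comparison: estimate the moral-rating penalty for algorithmic allocation with and without perceived comprehensibility as a covariate, and check whether the penalty shrinks. The sketch below uses simulated data and hypothetical variable names and scales; it is not the authors' analysis.

```python
# Hypothetical illustration with simulated data (not the study's analysis):
# the "algorithm penalty" in moral ratings shrinks once perceived
# comprehensibility is added as a covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1400
mechanisms = ["algorithm", "friend", "waiting_list", "lottery", "market"]
df = pd.DataFrame({"mechanism": rng.choice(mechanisms, size=n)})

# Assumed data-generating process: algorithms are rated as less comprehensible,
# and comprehensibility raises moral approval (both on 1-7 scales).
df["comprehensibility"] = np.where(
    df["mechanism"] == "algorithm",
    rng.normal(3.0, 1.0, n),
    rng.normal(5.0, 1.0, n),
).clip(1, 7)
df["moral_rating"] = (2.0 + 0.6 * df["comprehensibility"]
                      + rng.normal(0.0, 1.0, n)).clip(1, 7)

# Model 1: mechanism only -- the raw algorithm penalty relative to "friend".
m1 = smf.ols("moral_rating ~ C(mechanism, Treatment('friend'))", data=df).fit()
# Model 2: add comprehensibility -- the penalty should shrink substantially.
m2 = smf.ols("moral_rating ~ C(mechanism, Treatment('friend'))"
             " + comprehensibility", data=df).fit()

print(m1.params.filter(like="algorithm"))
print(m2.params.filter(like="algorithm"))
```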

This suggests that resistance to algorithmic allocation may stem partly from concerns about transparency and interpretability. If individuals do not understand how decisions are made, they may perceive the process as less fair or less accountable, even if the outcomes are impartial.

Opacity appears to function as a moral liability. Human decision-makers, waiting lists, and markets are familiar and conceptually graspable, even if imperfect. Algorithms, by contrast, are often perceived as black boxes operating beyond ordinary scrutiny.

While algorithmic systems are often promoted as consistent and bias-resistant, public acceptance may hinge less on technical performance and more on perceived clarity and accountability.

Essential goods, moral orientation, and demographic patterns

The researchers also examine whether attitudes toward AI allocation vary depending on the type of good being distributed. They distinguish between essential goods necessary for survival, such as kidneys and emergency shelter, and nonessential goods such as kindergarten places, legal services, and theater tickets.

Algorithm aversion proves stronger in nonessential contexts. When the good is not life-saving, participants are more likely to penalize algorithmic allocation in moral evaluations. In life-or-death scenarios, the moral penalty diminishes but does not disappear. This suggests that in high-stakes contexts, individuals may be somewhat more willing to accept algorithmic systems, possibly because efficiency and impartiality become more salient.

The study also explores individual moral orientations using dimensions from the Oxford Utilitarianism Scale. Participants scoring higher on impartial beneficence, reflecting a willingness to maximize welfare impartially, were more likely to evaluate allocation mechanisms positively overall. However, this orientation did not uniquely increase support for algorithms over other mechanisms.

The instrumental harm dimension, which captures willingness to accept harm in pursuit of a greater overall good, did not significantly predict attitudes toward algorithmic allocation. This indicates that aversion is not driven primarily by utilitarian reasoning about outcomes.

Age emerges as a factor, with older respondents tending to rate allocation mechanisms as less morally desirable overall. However, gender, education level, and political orientation do not significantly moderate algorithm aversion in the core analysis. The absence of strong partisan or demographic divides suggests that skepticism toward AI allocation may cut across social categories.

The rise of folk algorithmics

The authors interpret these findings through the lens of folk algorithmics: just as ordinary people hold structured beliefs about markets, economics, and justice that influence political behavior, they also hold systematic beliefs about algorithms.

These beliefs do not necessarily align with expert assessments of efficiency or fairness. Instead, they reflect everyday intuitions about legitimacy, accountability, and moral authority.

The study suggests that people do not view algorithmic allocation as morally neutral simply because it is automated. Moral judgments appear shaped by perceived transparency, familiarity, and the symbolic presence of human agency.

The concept of folk algorithmics implies that public attitudes toward AI governance will be shaped not only by performance metrics but by collective narratives and moral intuitions. Policymakers who deploy algorithmic systems without addressing these perceptions may face legitimacy challenges.

Implications for policy and AI governance

If public perception treats algorithmic allocation as morally inferior to human or procedural alternatives, implementation may encounter resistance even when systems demonstrate efficiency gains.

Enhancing transparency and explainability may mitigate some of this resistance. The strong role of perceived comprehensibility suggests that clearer communication about how algorithms operate could improve moral acceptance.

At the same time, the results warn against assuming that algorithmic neutrality will automatically generate trust. Familiar mechanisms such as waiting lists and markets may be viewed as more legitimate despite known imperfections.

Additionally, moral acceptance is domain-sensitive. Policymakers may find greater tolerance for AI in life-saving contexts, where impartial efficiency is highly valued, than in routine or discretionary allocations.

FIRST PUBLISHED IN: Devdiscourse