Who gets the job, the loan, or the benefit? AI now holds the power


CO-EDP, VisionRI | Updated: 18-02-2026 10:21 IST | Created: 18-02-2026 10:21 IST

Who gets the job interview. Who receives public benefits. Who is flagged as high risk. Increasingly, these outcomes are shaped not by human deliberation but by algorithmic systems embedded deep within institutional routines. According to a new study, this shift is redefining how societies understand fairness, accountability, and recognition.

In "AI in Everyday Life: How Algorithmic Systems Shape Social Relations, Opportunity, and Public Trust," published in the journal Societies, the authors examine how artificial intelligence systems restructure institutional authority. The study notes that AI-driven classification and predictive decision-making are altering opportunity pathways and reshaping public trust across multiple domains.

Delegated authority and the rise of algorithmic power

Institutions are rapidly transferring decision-making power to algorithmic systems. Automated tools screen job applicants, rank university candidates, flag welfare risks, and assess creditworthiness. These outputs are often treated as binding outcomes rather than advisory suggestions.

The authors reveal that this delegation represents a structural shift in institutional authority. Historically, administrative decisions involved deliberation, discretion, and direct human accountability. With algorithmic systems, authority becomes embedded in computational models trained on historical data and optimized for predictive performance.

This transformation changes the nature of institutional judgment. Decisions become statistical rather than relational. Individuals are evaluated based on predictive probabilities, behavioral patterns, and data profiles rather than contextual narratives. Once integrated into workflows, these systems operate at scale, standardizing assessments across thousands or millions of cases.

Algorithmic authority is not neutral. It reflects the objectives, data structures, and optimization criteria embedded within system design. When institutions adopt automated systems, they also adopt the logic encoded within them.

Responsibility becomes more diffuse. If a hiring algorithm filters out candidates or a welfare system flags individuals as high risk, accountability may be distributed across developers, data providers, managers, and regulators. This diffusion complicates contestation and redress.

The authors warn that as institutions increasingly rely on algorithmic outputs, the traditional foundations of legitimacy rooted in human deliberation and procedural engagement may weaken.

Classification, social sorting, and emerging inequalities

The study identifies classification as the key mechanism through which algorithmic systems reshape social relations. Automated systems sort individuals into categories such as suitable or unsuitable, high risk or low risk, eligible or ineligible. These categories determine access to jobs, benefits, credit, and visibility.

Classification is not merely descriptive. It is performative. Once an individual is labeled within a system, that classification influences opportunities and future interactions. For example, predictive models trained on historical employment data may prioritize certain profiles while excluding others. Risk scoring systems in welfare administration may shape how applicants are treated or monitored.

The authors argue that algorithmic classification generates inequality along two distinct pathways.

The first pathway is amplification. Because AI systems rely on historical data, they can reproduce and stabilize existing disparities. If past hiring patterns favored certain demographics, predictive systems trained on those patterns may continue to do so. If law enforcement data reflects uneven surveillance, predictive policing models may reinforce those imbalances.
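
A minimal, purely illustrative sketch of this amplification mechanism, using synthetic data (the group labels, historical hiring rule, and shortlist rates below are hypothetical, not drawn from the study): a screening model trained on historically skewed hiring decisions reproduces the same gap for new applicants whose underlying skill distributions are identical.

```python
# Illustrative sketch only: synthetic example of the "amplification" pathway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two demographic groups with identical underlying skill distributions.
group = rng.integers(0, 2, n)          # 0 or 1
skill = rng.normal(0, 1, n)

# Historical hiring decisions favored group 1 independently of skill.
hired = (skill + 0.8 * group + rng.normal(0, 1, n)) > 0.5

# A screening model trained on those historical outcomes.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Predicted shortlist rates for new, equally skilled applicants
# reproduce the old disparity rather than correcting it.
new_skill = rng.normal(0, 1, n)
for g in (0, 1):
    X_new = np.column_stack([new_skill, np.full(n, g)])
    print(f"group {g}: predicted shortlist rate = {model.predict(X_new).mean():.2f}")
```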

The second pathway is novelty. Algorithmic systems introduce new forms of abstraction that create categories not previously present in institutional practice. Labels such as "predicted non-compliance" or "engagement risk" emerge from model outputs rather than from established legal or social frameworks. These new classifications can influence how institutions allocate resources and exercise oversight.
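
A hedged sketch of how such a model-derived category might be minted in practice; the "engagement risk" thresholds, score field, and applicant records below are hypothetical illustrations, not taken from the study. The point is that the label exists only as an artifact of the model, yet it can travel with a case file and trigger further institutional action.

```python
# Illustrative sketch only: minting a new administrative label from a model score.
from dataclasses import dataclass

@dataclass
class Applicant:
    applicant_id: str
    predicted_score: float  # output of some upstream predictive model (hypothetical)

def engagement_risk_label(score: float) -> str:
    """Map a continuous model score onto a category with no prior legal basis."""
    if score >= 0.7:
        return "high engagement risk"
    if score >= 0.4:
        return "moderate engagement risk"
    return "low engagement risk"

applicants = [Applicant("A-001", 0.82), Applicant("A-002", 0.35)]
for a in applicants:
    # The derived label may shape monitoring and resource allocation downstream.
    print(a.applicant_id, engagement_risk_label(a.predicted_score))
```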

This dynamic extends beyond material outcomes. The study introduces the concept of "datafied citizenship" to describe how individuals increasingly encounter institutions through algorithmic profiles. Institutional standing is mediated by data traces, engagement metrics, and predictive scores. Recognition shifts from relational evaluation to data-driven categorization.

Such transformations have psychological and social consequences. When individuals receive automated rejections or experience unexplained ranking changes on digital platforms, they may interpret these outcomes as reflections of their competence or worth. Algorithmic classification thus shapes both opportunity structures and identity formation.

Opacity, trust, and the future of institutional legitimacy

Many algorithmic systems operate with limited intelligibility for affected individuals. Decisions may be delivered without clear explanations of how they were reached. Even when technical documentation exists, it may not align with human expectations of justification.

The authors argue that procedural opacity undermines trust. Legitimacy in institutional settings depends not only on outcome accuracy but on perceived fairness and recognition. When individuals cannot understand or challenge decisions, confidence in institutions may erode.

Transparency alone, as the study stresses, is insufficient. Providing technical details about model architecture does not necessarily translate into meaningful comprehension. Legitimacy requires intelligibility, responsiveness, and opportunities for contestation.

In this context, algorithmic governance must move beyond narrow bias mitigation strategies. While fairness interventions are important, they address only part of the problem. The study calls for governance frameworks grounded in four principles: intelligibility, accountability, inclusion, and trust-building.

Intelligibility requires explanations that resonate with everyday reasoning rather than purely technical descriptions. Accountability demands clear institutional responsibility for automated outcomes. Inclusion involves engaging affected communities in system design and oversight. Trust-building depends on maintaining channels for dialogue and redress.

The authors argue that algorithmic systems should not displace human judgment entirely. Instead, institutions must design socio-technical systems that preserve recognition and relational engagement even as automation increases.

Cross-domain institutional transformation

The patterns identified are not confined to a single sector. Similar dynamics appear across employment, welfare administration, digital platforms, finance, education, and urban governance.

In employment, automated screening tools determine which applicants receive interviews. In welfare systems, predictive analytics identify recipients deemed likely to require monitoring. On social media platforms, ranking algorithms determine whose voices are amplified and whose are marginalized. In finance, credit scoring systems influence access to loans and insurance.

Across these domains, automation standardizes classification and accelerates decision-making. Yet it also reduces opportunities for relational interaction. Institutional encounters become mediated through interfaces and scores rather than dialogue.

The study suggests that society is entering an era in which algorithmic systems function as part of the infrastructure of governance. They shape how institutions know individuals and how individuals experience institutional authority.

If legitimacy depends on recognition and fairness, and recognition becomes data-driven rather than relational, institutions must reconsider how they maintain public trust.

FIRST PUBLISHED IN: Devdiscourse