Human vs. machine: Explanations narrow fairness gap in AI recruitment decisions

CO-EDP, VisionRI | Updated: 07-11-2025 23:43 IST | Created: 07-11-2025 23:43 IST

A new study warns that perceptions of fairness remain a major barrier to the acceptance of artificial intelligence in hiring processes across industries. A team of researchers from the University of Graz and other European institutions has found that job applicants still judge AI-driven hiring decisions as less fair than those made by humans, unless clear explanations are provided.

The research, titled "Rejected by an AI? Comparing Job Applicants' Fairness Perceptions of Artificial Intelligence and Humans in Personnel Selection" and published in Frontiers in Artificial Intelligence, explores how candidates interpret fairness, trust, and accountability in algorithm-based recruitment systems. The findings highlight a critical gap between technological efficiency and human psychology that could shape the future of AI deployment in human resources.

Explanations can repair fairness perceptions in AI-based hiring

Can transparency reduce skepticism toward AI-driven recruitment? To answer this, the authors conducted a controlled online vignette experiment with 921 participants, testing how applicants respond to rejection when the decision is made either by a human recruiter or by an AI system, with or without an accompanying explanation.

Participants rated their experiences on four measures: outcome fairness, procedural fairness, interpersonal treatment, and intent to recommend the organization. The results showed a consistent pattern: providing explanations significantly improved fairness perceptions regardless of whether a human or an AI made the final decision.

While AI-based decisions were generally perceived as colder and less empathetic, explanatory feedback helped bridge that emotional gap, increasing understanding and acceptance of rejection outcomes. The study found that clear reasoning behind the decision, explaining why an applicant was not selected, reduced algorithm aversion: a psychological bias that leads people to distrust machine judgments.

This suggests that transparency is not merely a compliance feature but a core determinant of user trust in automated selection systems. The authors conclude that when candidates know how AI evaluates them, they are more likely to view the process as fair and legitimate.

AI efficiency meets human skepticism

The research was inspired by a paradox in modern recruitment. Organizations increasingly adopt AI tools for resume screening, skill matching, and initial interviews to reduce bias and improve efficiency. Yet, job applicants often remain skeptical or uncomfortable with algorithmic decision-making, perceiving it as impersonal and potentially biased.

The authors note that even if AI can theoretically reduce discrimination, its opacity undermines user confidence. Applicants tend to trust human recruiters more because they associate them with empathy, discretion, and accountability.

Interestingly, the study also found that the absence of explanations amplified negative reactions to AI-based rejections, even when outcomes were similar to those of human decisions. This demonstrates that fairness is not a purely objective quality; it is deeply shaped by how transparent and communicative the decision-making process appears to the individual.

According to the researchers, AI systems are judged not only by their accuracy but also by their social behavior, including how they communicate and justify decisions. For organizations, this means that algorithmic recruitment tools must be designed with human-centered feedback mechanisms that preserve a sense of procedural justice.

Implications for employers and HR technology

The authors argue that HR professionals must balance efficiency with ethical communication. Implementing AI in recruitment requires not only technical calibration but also the creation of explanation interfaces: structured feedback systems that inform applicants of the rationale behind hiring outcomes.

These explanations can take various forms, such as brief rationales, performance-based insights, or data-driven feedback summaries that articulate how candidates were evaluated relative to the position's requirements. Such tools not only enhance fairness perception but may also strengthen employer branding by demonstrating transparency and respect.

Moreover, the study stresses the importance of ethics-by-design principles in AI-driven selection platforms. Developers should embed fairness and accountability into algorithms from the outset, ensuring that systems do not merely replicate biases or automate rejection without meaningful justification.

Ethical considerations are especially critical given the expanding role of AI in hiring, from screening resumes and analyzing video interviews to predicting cultural fit. Without clear communication protocols, even accurate models can generate distrust and harm organizational reputation.

Toward a human-aware AI in recruitment

The researchers recommend a hybrid recruitment model in which AI assists but does not replace human decision-making. Human oversight ensures that machine assessments are contextualized, empathetic, and ethically interpreted. This approach aligns with the emerging paradigm of human-AI collaboration, where technology augments rather than undermines human judgment.

The findings also invite policymakers and regulators to consider transparency requirements for AI hiring tools, ensuring that applicants' rights to explanation and due process are respected. Such policies could prevent opaque algorithmic practices and reinforce the principle that fairness must be both measurable and perceivable.

In the authors' view, fairness perception is not an optional feature; it is the ethical infrastructure that sustains public trust in artificial intelligence.

FIRST PUBLISHED IN: Devdiscourse