Human managers and AI deliver criticism very differently in the workplace


CO-EDP, VisionRI | Updated: 13-02-2026 18:42 IST | Created: 13-02-2026 18:42 IST

AI-powered performance evaluations are now changing the way employees are corrected, disciplined, and monitored at work. What was once a private conversation with a manager is now often an automated assessment delivered without dialogue. This shift is changing not only how feedback is given, but how it is emotionally processed.

According to a new study, "Shame or Anger? The Impact of Negative Performance Feedback Sources (AI Versus Leader) on Employees' Job Crafting," published in Behavioral Sciences, feedback from AI systems produces a markedly different reaction than feedback from human leaders. The study finds that machine-delivered criticism fuels anger and defensive work behavior, while leader criticism more often leads to shame-driven self-improvement.

Leaders and algorithms trigger different emotions

Based on Affective Events Theory, which explains how workplace events generate emotional reactions that influence behavior, the authors argue that negative performance feedback is not a neutral signal. Instead, it is an emotionally charged event whose impact depends heavily on who or what delivers the message.

The research is based on two complementary studies. The first used a controlled, scenario-based experiment to isolate emotional responses to identical negative feedback delivered either by a human leader or by an AI system. The second involved a large-scale survey of full-time employees across nine enterprises in China, capturing real-world experiences with both forms of feedback.

Across both studies, a clear pattern emerged. When negative feedback came from a human leader, employees were more likely to experience shame. This emotion is closely tied to social evaluation and self-worth, reflecting concerns about how one is seen by others within the organization. The presence of a leader as the evaluator amplified the sense of personal failure, even when the feedback focused on task performance rather than character.

On the other hand, when negative feedback was generated by an AI system, employees reported higher levels of anger. Rather than internalizing the criticism, workers were more likely to direct frustration outward, perceiving the evaluation as rigid, unfair, or insensitive to context. The absence of interpersonal cues, explanation, or opportunity for dialogue contributed to feelings of resentment rather than self-reflection.

These findings challenge the assumption that removing human judgment from performance evaluation automatically reduces emotional strain. Instead, the research suggests that AI systems shift the emotional burden from self-directed shame to externally directed anger, with distinct consequences for how employees respond.

How emotions reshape work behavior

The study connects these emotional reactions to job crafting, a concept that refers to how employees proactively alter their tasks, relationships, and cognitive framing of work. Job crafting is widely seen as a key mechanism through which employees adapt to demands and maintain engagement, especially in dynamic or high-pressure environments.

The authors distinguish between two forms of job crafting. Promotion-oriented job crafting involves expanding responsibilities, seeking new challenges, and acquiring additional resources to improve performance. Prevention-oriented job crafting focuses on reducing demands, avoiding risk, and minimizing exposure to stress or scrutiny.

The research shows that shame and anger channel employees into different paths. Shame, most commonly triggered by leader-delivered negative feedback, was associated with promotion-oriented job crafting. Employees experiencing shame were more likely to respond by trying to improve themselves, take initiative, and restore their standing through constructive effort. In this sense, shame acted as a motivational force that, under certain conditions, pushed employees toward growth.

Anger, on the other hand, was strongly linked to prevention-oriented job crafting. Employees who felt anger in response to AI-generated negative feedback tended to protect themselves by pulling back. This included avoiding demanding tasks, limiting engagement with evaluative processes, and restructuring work to reduce the risk of further negative judgments. Rather than driving improvement, anger encouraged defensive adaptation.

This divergence matters because it suggests that the same corrective message can produce opposite behavioral outcomes depending on the feedback source. While both leader and AI feedback aim to correct performance, one tends to mobilize effort and development, while the other may quietly erode engagement and ambition.

Trust and algorithm aversion shape reactions

The study also highlights the importance of individual and relational factors in shaping emotional responses. Two moderating variables play a central role: trust in leaders and algorithm aversion.

Leader trust refers to employees' belief in their manager's competence, integrity, and goodwill. The research shows that high levels of leader trust weaken the relationship between negative feedback and shame. When employees trust their leaders, they are more likely to interpret criticism as constructive rather than punitive, reducing the emotional sting without eliminating its motivational value. In these cases, negative feedback is less likely to damage self-worth and more likely to be translated into productive action.

Algorithm aversion, by contrast, intensifies anger responses to AI-generated feedback. Employees who are skeptical of algorithmic decision-making are more likely to perceive AI evaluations as illegitimate or unfair, especially when systems lack transparency or contextual sensitivity. For these individuals, negative feedback from AI systems amplifies frustration and defensive behavior, reinforcing prevention-oriented job crafting.

Together, these findings suggest that employee reactions to performance feedback are not uniform. Organizational culture, leadership quality, and attitudes toward technology all shape how feedback is received and acted upon.

Implications for algorithmic management

The study raises critical questions for organizations increasingly reliant on AI-driven performance management. While algorithmic systems promise consistency and scalability, they may also undermine the very behaviors companies seek to promote.

If AI-generated negative feedback systematically triggers anger and defensive job crafting, organizations risk creating environments where employees comply superficially while disengaging emotionally. Over time, this could reduce innovation, learning, and discretionary effort, especially in roles that require creativity or collaboration.

The findings also complicate the narrative that AI feedback is inherently fairer or less biased than human judgment. Even when algorithms apply standardized rules, employees may still perceive them as unjust if they cannot understand, question, or contextualize the evaluation. Emotional responses, not just procedural features, shape perceptions of fairness.

For leaders, the research underscores the continued importance of human involvement in performance management. While AI can support data collection and analysis, the delivery of negative feedback remains a socially sensitive act. Human leaders, particularly those who have built trust, are better positioned to frame criticism in ways that encourage growth rather than withdrawal.

Rethinking feedback design

The study notes that organizations should move away from a binary choice between human and AI feedback and instead design hybrid systems that balance efficiency with emotional intelligence. This includes clarifying the role of AI as a support tool rather than an unquestionable authority, increasing transparency around evaluation criteria, and providing channels for explanation and dialogue.

Training leaders to interpret and communicate AI-generated insights may also help mitigate negative emotional reactions. When employees see that a trusted leader stands behind the feedback and can contextualize it, the likelihood of constructive response increases.

Finally, organizations must address algorithm aversion directly. This involves educating employees about how AI systems work, what data they use, and where their limitations lie. Without this understanding, AI feedback risks being seen as arbitrary or hostile, regardless of its technical sophistication.

FIRST PUBLISHED IN: Devdiscourse