White-collar automation misjudged: Employers plan to adapt, not replace workers
A new study by researchers from ZEW – Leibniz Centre for European Economic Research and the University of Mannheim sheds light on how employers perceive the growing influence of artificial intelligence on professional work. The research, titled "Beliefs About Bots: How Employers Plan for AI in White-Collar Work," provides some of the first evidence from a randomized experiment on how business leaders adjust their expectations and strategies when exposed to credible information about automation risks.
The paper focuses on the white-collar sector, particularly tax advisory firms in Germany, to examine how information about AI-driven automation affects firms' beliefs, hiring decisions, and future planning.
Employers underestimate automation risks in high-skill jobs
The study challenges a widespread assumption: that white-collar work is insulated from automation. Through a randomized information experiment, the researchers find that employers consistently underestimate the degree to which AI technologies can automate professional, cognitive, and analytical tasks.
Before receiving information, most firms believed that automation risks were limited to routine clerical roles. However, after exposure to expert evidence on the capabilities of modern AI systems, including language-based and reasoning models, firms revised their expectations significantly upward. The shift was strongest for occupations dominated by repetitive analytical functions, such as data entry, bookkeeping, and standardized compliance reporting.
AI's reach into high-skill professions is broader than employers assume, suggesting that even roles requiring judgment, interpretation, or domain-specific expertise may be partially automatable. Yet, while risk perceptions changed, the study finds that this did not translate into immediate changes in hiring behavior, indicating that many firms view AI transformation as a gradual process rather than an imminent disruption.
Information alters expectations, not short-term hiring
The key question driving the experiment was whether updating beliefs about AI automation risks would influence near-term employment strategies. Firms were randomly assigned to receive credible data about the share of tasks susceptible to automation in their industry.
Following the intervention, firms increased their perceived risk of automation but maintained their short-term hiring plans. This suggests that while awareness of AI's potential grew, most employers were not ready to act immediately by reducing headcount or freezing recruitment. Instead, they redirected attention toward restructuring work content and future-proofing employee skills.
Interestingly, the experiment showed that updated beliefs about automation led to rising expectations of productivity and profitability, even as firms anticipated minimal changes in wages. This combination implies limited rent-sharing, meaning that while employers foresee gains from AI adoption, they do not expect these benefits to flow evenly to workers. The researchers interpret this as a possible precursor to widening income inequality within firms, especially between AI-complementary and AI-substitutable roles.
The authors also note that automation awareness stimulated forward-looking adaptation strategies. Many firms reported plans to introduce training in data analysis, AI system management, and digital compliance tools. Others expressed an interest in reassigning routine-heavy staff toward human-AI collaboration tasks. These findings suggest a mindset shift among employers: AI is no longer viewed merely as a threat but as a strategic input requiring organizational learning and redesign.
AI beliefs drive productivity expectations but highlight inequality risks
The study found that changes in beliefs about automation are linked to increased optimism about firm performance. After learning about AI's potential, employers became more confident that technology would raise efficiency and improve financial outcomes.
However, this optimism coexists with selective adaptation. Firms expect higher output with leaner or restructured workforces, but wage expectations remain largely static. This signals a disconnect between technological productivity and employee compensation, a dynamic that could amplify inequality in professional service sectors.
AI's arrival in white-collar work will likely reconfigure job content rather than eliminate entire occupations. Tasks that require human intuition, contextual reasoning, and interpersonal trust are expected to remain valuable. In contrast, repetitive analytical work, even in high-skill domains, will increasingly be automated. This hybridization of roles is already visible in fields like tax consultancy, law, and finance, where AI tools handle data-intensive components, leaving humans to interpret and validate results.
The study notes that employer responses to automation risk depend not only on technology's capabilities but also on belief formation and the accuracy of available information. When firms receive structured, evidence-based insights rather than sensationalist predictions, they adjust their expectations rationally, preparing for AI integration without overreacting.
Long-term implications for labor markets and policy
The findings carry broader implications for the future of work and labor market policy. If most employers currently underestimate AI's transformative potential, the transition to automation could catch sectors off guard, leading to skill mismatches and adjustment lags.
The study argues that policy interventions should focus on aligning employer beliefs with technological realities. By improving awareness of AI capabilities and their economic implications, governments can encourage early investment in upskilling and job redesign. The researchers also highlight the importance of training programs and continuous learning systems that prepare employees for AI-augmented workflows rather than displacement.
The paper provides new empirical support for the idea that information, not just innovation, shapes how societies adapt to AI. By identifying belief formation as a critical channel, it reframes automation not as a purely technological process but as a cognitive and strategic one.
The authors assert that AI's rise in professional work will not unfold through sudden job losses but through gradual task substitution, role evolution, and shifts in firm organization. Over time, differences in how firms interpret and act on automation information could produce divergent outcomes across industries, with some achieving smooth adaptation and others facing disruption.
First published in: Devdiscourse