Public trust in AI falls after ChatGPT boom
Public enthusiasm for artificial intelligence (AI) has dimmed sharply following the global emergence of generative AI tools like ChatGPT, according to a new two-wave national survey study posted on arXiv. The research, titled "Reduced AI Acceptance After the Generative AI Boom: Evidence From a Two-Wave Survey Study," provides the first longitudinal evidence of changing public attitudes after the initial fascination with generative AI.
The study findings reveal a paradox at the heart of the AI revolution: technological progress is accelerating, but social acceptance is slowing down.
Public confidence in AI drops as human oversight gains support
The study analyzed representative survey data collected from Swiss residents in early 2022, before generative AI gained widespread attention, and again in mid-2023, when public awareness of AI had reached unprecedented levels. By that point, nearly every respondent (96 percent) had heard of ChatGPT, and about half had used it at least once. Yet despite soaring familiarity, overall acceptance of AI technologies declined across nearly all areas of decision-making.
The authors report that public preference for human control increased significantly during this period. Respondents expressed stronger opposition to autonomous AI decision-making in sensitive domains such as health, hiring, and criminal justice. Medical diagnosis, therapy termination, and parole decisions were the least accepted contexts for AI involvement, reflecting deep concerns about accountability and fairness.
While AI acceptance remained higher in less consequential areas such as fake-news detection or document sorting, the broader trend shows growing skepticism. The authors attribute this decline to heightened media coverage, public debate, and exposure to the failures and ethical controversies surrounding generative AI systems since 2022.
Widening inequalities in AI acceptance
The survey results reveal that the generative AI boom not only reduced overall acceptance but also amplified existing divides among population groups. Educational level emerged as a key determinant: individuals with higher education remained relatively open to AI adoption, whereas those with less education became increasingly cautious. This divergence widened the education gap in acceptance across all seven decision contexts measured.
The findings also highlight distinct language-region and gender differences. Respondents from the French-speaking part of Switzerland showed lower acceptance than those from the German-speaking region, particularly for AI applications in healthcare. Gender differences also intensified: women's acceptance declined across five of the seven decision scenarios, while men's attitudes shifted mainly in financial contexts such as loan decisions.
Age and digital proficiency played more modest roles, but the study found that familiarity with digital technologies alone did not guarantee higher acceptance. This suggests that trust in AI depends less on exposure and more on perceived risk, benefit, and ethical alignment.
The authors interpret these widening disparities as indicators of social polarization in AI perception, warning that if left unaddressed, unequal trust could hinder effective technology integration and undermine public support for future AI policies.
Implications for regulation and governance
The research offers important lessons for policymakers navigating the tension between technological innovation and societal trust. As public skepticism grows, the study underscores a clear demand for human oversight in high-stakes decision-making. Rather than treating AI as a standalone decision-maker, the authors recommend designing systems where humans remain accountable and involved, especially in domains affecting personal welfare, justice, and healthcare.
They propose context-specific regulation instead of uniform, one-size-fits-all rules. In areas where decisions carry serious ethical or emotional consequences, such as medical or legal contexts, maintaining human authority should be nonnegotiable. Conversely, more autonomy may be acceptable in administrative or analytical functions with limited social impact.
The authors further emphasize participatory governance, meaning the active involvement of citizens, experts, and affected groups in shaping AI policy. Such engagement could help rebuild trust, clarify expectations, and ensure that regulation evolves alongside rapid technological change. They point to participatory frameworks and regulatory sandboxes, where real-world testing and feedback can guide responsible deployment.
Addressing inequality is another policy priority highlighted by the study. Since acceptance fell more sharply among less educated, French-speaking, and female respondents, policymakers are urged to target these groups in communication and education efforts. Building inclusive trust in AI, the authors argue, requires acknowledging differing perceptions of risk and control across social lines.
First published in: Devdiscourse