Generative AI needs social ownership, not corporate monopoly

CO-EDP, VisionRI | Updated: 29-10-2025 18:29 IST | Created: 29-10-2025 18:29 IST

Policymakers around the world are inviting public feedback on artificial intelligence (AI), but few are truly listening, according to a new study titled "Lost in Translation: Policymakers Are Not Really Listening to Citizen Concerns About AI", published as part of the NSF–NIST Institute for Trustworthy AI series.

The research, conducted under the Centre for International Governance Innovation (CIGI), offers a revealing comparative analysis of how three democratic governments (the United States, Australia, and Colombia) attempted to involve their citizens in AI governance.

Public consultations that failed to connect

The study finds that despite an explosion of public concern about AI's social, economic, and ethical implications, governments have not succeeded in creating genuine two-way communication with their citizens. The authors analyzed three major consultation processes: the U.S. National Telecommunications and Information Administration's (NTIA) AI Open Model Weights Request for Comment, Colombia's Ministry of Science and Technology's AI ethics roadmap consultation, and Australia's Department of Industry, Science and Resources' "Supporting Responsible AI" discussion paper.

While each government formally invited citizens to comment on AI risks, participation levels were strikingly low: 510 submissions in Australia, 326 in the U.S., and only 73 in Colombia, representing far less than one percent of each country's population. The study found that none of the three governments used broad marketing strategies or inclusive outreach to ensure diverse participation. Even more concerning, there was little evidence that policymakers seriously incorporated the public's feedback into final policy frameworks.

The authors argue that this weak engagement has created a "trust deficit" between citizens and governments and represents a missed opportunity to foster legitimacy in AI governance. Their findings show that governments have often treated public consultation as a procedural obligation rather than a participatory exercise that could enrich policy outcomes and democratize technology oversight.

AI literacy and the democratic deficit

The research draws a link between limited AI literacy and the failures of public engagement. According to the authors, many citizens are capable of understanding AI's opportunities and risks but are rarely given the tools or context to contribute meaningfully. Citing surveys by the Schwartz Reisman Institute and IPSOS, the study notes that while 73% of people across 21 countries say they understand what AI is, only 21% trust tech companies to self-regulate.

This paradox highlights a broader issue: public skepticism toward both governments and private actors. Despite strong opinions, the public often lacks accessible background materials or structured avenues to provide informed feedback. The authors emphasize that policymakers must invest in public education and outreach to bridge this knowledge gap.

The authors also note that government officials themselves struggle to keep pace with AI's rapid evolution. Many rely heavily on experts or corporate advisors who may have vested interests in shaping regulation. This dependence, they warn, perpetuates a policymaking ecosystem that privileges industry perspectives over citizen voices.

Their methodology, rooted in the International Association for Public Participation (IAP2) framework, evaluated how governments "inform, consult, involve, collaborate, and empower" the public. The analysis revealed that all three countries scored lowest in empowerment and collaboration, with consultation rarely moving beyond passive feedback collection.

A blueprint for genuine AI governance reform

The authors do not merely critique; they propose reform. The paper outlines eight recommendations to modernize participatory AI policymaking in democratic systems. These include supporting AI literacy programs, broadening outreach through digital campaigns, establishing regular online town halls, and creating transparent feedback loops where governments clearly demonstrate how citizen input shapes policy outcomes.

They argue that such reforms are essential not just for better governance but for strengthening democratic trust in the face of technological upheaval. If ignored, public alienation could deepen, especially as AI becomes increasingly intertwined with daily life, employment, and civil rights.

The authors point to emerging models of AI-assisted consultation that could help governments process large-scale public input more effectively. Tools powered by large language models (LLMs) could summarize citizen submissions, detect common themes, and assist policymakers in making data-driven interpretations of public sentiment. However, the study warns that such tools must operate under strict ethical guardrails to ensure they amplify, not distort, the democratic process.
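To make the idea concrete, here is a minimal sketch of how a consultation-analysis tool might tally recurring themes across citizen submissions. This uses simple keyword matching as a stand-in for an LLM, and every submission text and theme keyword below is invented for illustration; it is not the study's tool or data.

```python
from collections import Counter

# Hypothetical citizen submissions (illustrative only).
submissions = [
    "AI hiring tools should be audited for bias before deployment.",
    "Open model weights raise safety concerns; independent audits are needed.",
    "Job displacement from AI demands funded retraining programs.",
    "Bias in facial recognition systems must be regulated.",
]

# Hypothetical theme vocabulary a policymaker's tool might track.
themes = {
    "bias": ["bias", "fairness", "discrimination"],
    "safety": ["safety", "risk", "audit"],
    "employment": ["job", "retraining", "displacement"],
}

def tally_themes(texts, theme_keywords):
    """Count how many submissions mention each theme at least once."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for theme, keywords in theme_keywords.items():
            if any(kw in lowered for kw in keywords):
                counts[theme] += 1
    return counts

print(tally_themes(submissions, themes))
```

An LLM-based version would replace the keyword lookup with model-generated topic labels and summaries, which is where the study's warning about ethical guardrails applies: the aggregation step, not the collection step, is where public sentiment can be distorted.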

The authors describe the current global landscape as one where citizen feedback "gets lost in translation", often buried within bureaucratic processes or sidelined by dominant corporate narratives.

First published in: Devdiscourse