From exclusion to inclusion: AI redefines how citizens engage in public decision-making


Artificial intelligence (AI) is emerging as a powerful tool to reshape civic participation, but new research shows that without careful design, it risks deepening the very inequalities it seeks to address. A multidisciplinary team finds that AI-driven civic engagement platforms can empower marginalized populations only if accessibility, safety, and transparency are treated as core design principles rather than afterthoughts.

The study, titled "Inclusive AI-Enhanced Civic Engagement: Empowering Marginalized Voices" and published in the journal Societies, investigates how AI can be integrated into digital participation platforms to better include vulnerable groups such as older adults, people with disabilities, and individuals with limited digital literacy.

Based on two complementary studies involving citizens from Romania and Slovakia alongside AI and ethics experts, the research presents a detailed roadmap for designing inclusive civic technologies. It highlights a central tension: while AI can expand participation through accessibility tools and automation, it can also introduce new barriers through bias, opacity, and complexity.

Accessibility and safety emerge as core drivers of AI civic engagement

The research shows that for marginalized users, the success of AI-powered civic platforms depends less on advanced features and more on basic usability, trust, and protection. Participants consistently prioritized tools that reduce barriers and ensure a safe digital environment over those designed for engagement or entertainment.

Citizens involved in the study, many of whom faced overlapping vulnerabilities such as low income, limited digital skills, or social marginalization, emphasized the need for features that make participation both understandable and secure. AI-enabled translation tools, simplified language functions, and speech-to-text capabilities were identified as critical enablers, helping users overcome linguistic and cognitive barriers that often prevent meaningful engagement.
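
To make that idea concrete, the sketch below shows how such accessibility aids might be chained on the input side of a civic platform. It is a minimal illustration under stated assumptions, not the study's implementation: transcribe, translate, and simplify are hypothetical placeholders standing in for real speech-to-text, machine-translation, and text-simplification services.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical placeholders; a real platform would call dedicated
# speech-to-text, machine-translation, and text-simplification models.
def transcribe(audio: bytes) -> str:
    return "<transcript of the spoken submission>"

def translate(text: str, target_lang: str) -> str:
    return f"<{target_lang} translation of: {text}>"

def simplify(text: str) -> str:
    return f"<plain-language version of: {text}>"

@dataclass
class Submission:
    text: Optional[str] = None
    audio: Optional[bytes] = None
    language: str = "ro"  # e.g. a Romanian-speaking participant

def normalize(submission: Submission, platform_lang: str = "en") -> str:
    """Reduce any input to simplified text in the platform language, so
    spoken, foreign-language, and complex input all become usable."""
    text = submission.text or transcribe(submission.audio or b"")
    if submission.language != platform_lang:
        text = translate(text, platform_lang)
    return simplify(text)

print(normalize(Submission(audio=b"...", language="ro")))
```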

Equally important were safety-oriented features. Participants strongly supported the inclusion of systems that detect toxic content, filter spam, and protect against phishing attempts. These safeguards were not viewed as optional enhancements but as essential prerequisites for participation. The findings suggest that fear of harassment, fraud, or misuse remains a major deterrent for vulnerable users, particularly in contexts where digital infrastructure and trust in institutions are limited.
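
As an illustration of how such a safeguard could sit in front of a comment stream, here is a minimal pre-publication screening sketch. The regular-expression patterns are invented for the example; a production system would rely on trained classifiers and human review rather than fixed keyword lists.

```python
import re
from dataclasses import dataclass, field

# Invented example patterns, for illustration only.
SPAM_PATTERNS = [r"(?i)\bfree money\b", r"(?i)\bbuy followers\b"]
PHISHING_PATTERNS = [r"(?i)verify your (account|password)",
                     r"(?i)https?://\S*login\S*"]

@dataclass
class ScreeningResult:
    allowed: bool
    reasons: list = field(default_factory=list)

def screen_comment(text: str) -> ScreeningResult:
    """Check a comment before publication, so harmful content is
    intercepted rather than cleaned up after users have seen it."""
    reasons = []
    if any(re.search(p, text) for p in SPAM_PATTERNS):
        reasons.append("possible spam")
    if any(re.search(p, text) for p in PHISHING_PATTERNS):
        reasons.append("possible phishing")
    return ScreeningResult(allowed=not reasons, reasons=reasons)

print(screen_comment("Please verify your account at http://example.com/login"))
# -> ScreeningResult(allowed=False, reasons=['possible phishing'])
```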

The study also reveals a clear preference for control over automated systems. Users favored tools that allow them to actively navigate content, such as search bars and filtering options, rather than relying on algorithmic recommendations or automated categorization. This reflects a broader skepticism toward passive AI systems that shape user experiences without clear user input or understanding.

Interestingly, features commonly associated with commercial social media platforms, including gamification and deep integration with external networks, were among the least favored. Participants viewed these elements as distractions rather than facilitators of civic engagement, underscoring a fundamental difference between civic platforms and entertainment-driven digital ecosystems.

Bridging the gap between citizen needs and technical design

While citizen preferences provide a clear picture of desired features, the study finds that translating these needs into functional and ethical systems requires significant technical and governance considerations. This gap between user expectations and implementation realities is where AI experts play a critical role.

In the second phase of the research, interdisciplinary expert groups evaluated the feasibility and implications of the features identified by citizens. Their findings point to the need for a structured framework that integrates technical robustness, ethical safeguards, and user education.

Experts stress that users must be clearly informed about how AI systems function, what data they rely on, and what limitations they have. This is particularly important for AI chatbots, which are increasingly used to guide users through civic processes. Without clear explanations, users may overtrust or misunderstand these systems, leading to misinformation or disengagement.

To address this, the study recommends that platforms provide detailed but accessible information about AI tools, including whether a chatbot is rule-based or generative, the sources of its knowledge, and the possibility of inaccurate or inconsistent responses. Users should also be made aware that AI outputs are probabilistic rather than deterministic, meaning that responses may vary even when the same question is asked repeatedly.
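
The toy example below illustrates why identical questions can produce different answers: generative systems sample from a probability distribution over possible continuations rather than looking up a single fixed response. The candidate phrases and weights here are invented purely for the demonstration.

```python
import random

# Toy next-phrase distribution for a single prompt; a real generative
# model produces such distributions over huge vocabularies at every step.
CANDIDATES = ["this Friday", "next week", "at the end of the month"]
WEIGHTS = [0.5, 0.3, 0.2]

def chatbot_answer(prompt: str) -> str:
    # Sampling means the same prompt can yield a different completion
    # each time it is asked -- probabilistic, not deterministic.
    choice = random.choices(CANDIDATES, weights=WEIGHTS, k=1)[0]
    return f"The public consultation closes {choice}."

for _ in range(3):
    print(chatbot_answer("When does the consultation close?"))
```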

In addition to transparency, the research highlights the importance of human oversight. AI systems should not operate in isolation but must include mechanisms for user feedback, error reporting, and human intervention. Features such as collaborative moderation, where users can flag harmful content for review, are seen as essential for maintaining trust and accountability.
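
One way to realize such collaborative moderation is a flag-and-review queue, in which user reports are routed to human moderators instead of triggering automatic removal. The sketch below is a schematic illustration of that pattern, not the architecture described in the study.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Flag:
    content_id: str
    reporter_id: str
    reason: str

# Reports wait here for a person; AI may prioritize but never decides alone.
review_queue: "Queue[Flag]" = Queue()

def flag_content(content_id: str, reporter_id: str, reason: str) -> None:
    """Any user can report content; the report goes to human review."""
    review_queue.put(Flag(content_id, reporter_id, reason))

def review_next() -> None:
    flag = review_queue.get()
    print(f"Moderator reviews {flag.content_id}: "
          f"flagged by {flag.reporter_id} for '{flag.reason}'")

flag_content("comment-42", "user-7", "harassment")
review_next()
```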

Data privacy and fairness also emerge as critical concerns. Experts emphasize the need for unbiased training data to prevent discrimination and recommend approaches such as federated learning to minimize data misuse. These measures are particularly important for vulnerable populations, who may be disproportionately affected by biased or intrusive systems.
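
Federated learning keeps raw data on users' devices and shares only model updates with a central server, which then averages them. The sketch below demonstrates that core loop with a toy one-parameter linear model; every name and number in it is illustrative rather than drawn from the study.

```python
import random

def local_update(w: float, local_data, lr: float = 0.01) -> float:
    """Each device refines the model on its own data; raw data never leaves."""
    # Toy one-parameter model y = w * x, squared-error gradient steps.
    for x, y in local_data:
        w -= lr * 2 * (w * x - y) * x
    return w

def federated_average(updates) -> float:
    """The server sees only parameters, never the citizens' records."""
    return sum(updates) / len(updates)

# Three hypothetical devices, each holding private samples of y ≈ 2x.
devices = [[(x, 2 * x + random.gauss(0, 0.1)) for x in range(1, 5)]
           for _ in range(3)]

w_global = 0.0
for _ in range(20):  # 20 communication rounds
    w_global = federated_average([local_update(w_global, d) for d in devices])
print(f"learned weight: {w_global:.2f}  (true value is about 2.0)")
```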

Ethical AI and user education become vital to inclusive participation

Technical solutions alone are not sufficient to ensure inclusive civic engagement. Equally important is the need for comprehensive user education and ethical design practices that empower users to interact with AI systems confidently and critically.

A major finding is that many users lack a basic understanding of how AI technologies work. This knowledge gap can lead to unrealistic expectations, overreliance on automated systems, or complete disengagement. To address this, the researchers advocate for integrating user guidance directly into platforms, including tutorials, example queries, and explanations of system behavior.

For instance, users should be informed that summarized content may omit important context, that speech recognition tools may struggle with dialects, and that AI-generated responses may contain errors. By framing AI as an assistive but imperfect tool, platforms can help users develop a balanced understanding that supports effective and responsible use.
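
In interface terms, this framing can be as simple as attaching explicit limitation notices to every AI-assisted answer. The sketch below shows one hypothetical way a platform might bundle such caveats with its output; the wording and flags are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class AssistedAnswer:
    text: str
    caveats: list = field(default_factory=list)

def with_caveats(text: str, *, summarized: bool = False,
                 generated: bool = False,
                 transcribed: bool = False) -> AssistedAnswer:
    """Attach the relevant limitation notices to an AI-assisted output."""
    caveats = []
    if summarized:
        caveats.append("This summary may omit important context.")
    if transcribed:
        caveats.append("Speech recognition may mis-hear dialects or names.")
    if generated:
        caveats.append("AI-generated text can contain errors; verify before acting.")
    return AssistedAnswer(text, caveats)

answer = with_caveats("The zoning proposal affects districts 3 and 5.",
                      summarized=True, generated=True)
print(answer.text)
for note in answer.caveats:
    print(" !", note)
```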

The research also warns against the humanization of AI systems, which can blur the line between machine and human agency. Designers are encouraged to maintain a clear distinction, using visual and functional cues that reinforce the artificial nature of these systems. This approach helps preserve critical thinking and prevents users from attributing undue authority or trust to AI outputs.

In regions with low digital literacy and underdeveloped digital infrastructure, such as the study's focus areas in Eastern Europe, the risks of exclusion are particularly pronounced. Without deliberate efforts to design for inclusivity, digital participation platforms may reinforce existing disparities rather than reduce them.

The findings highlight the importance of involving marginalized users in the design process from the outset. By adopting co-design approaches that incorporate user feedback and lived experiences, developers can create systems that better align with the needs and constraints of diverse populations.

First published in: Devdiscourse