AI as informer, guardian and persuader: How generative systems influence beliefs
Is generative artificial intelligence (genAI) a weapon of mass deception or a frontline defense against misinformation? A new academic assessment lays out a detailed roadmap of how large language models (LLMs) and related systems could either stabilize or destabilize democratic discourse.
In a study titled "The Seven Roles of Generative AI: Potential & Pitfalls in Combatting Misinformation," published in Behavioral Science & Policy, researchers examine how generative AI systems operate across seven distinct roles in the information environment. Using a strengths, weaknesses, opportunities and threats (SWOT) framework, the researchers present one of the most structured analyses to date of how generative AI intersects with misinformation detection, amplification and correction.
Informer, guardian and persuader: Power and peril at scale
The first three roles outlined in the study focus on how generative AI interacts directly with information production, verification and persuasion.
As an informer, generative AI functions as an on-demand explainer and synthesizer. It retrieves information across languages, summarizes large bodies of content and tailors explanations to different audiences. These capabilities make it highly efficient at multilingual search, translation and contextual explanation. In theory, such tools can improve access to credible information and lower barriers to knowledge across geographic and linguistic divides.
However, the same systems are vulnerable to hallucinations, bias embedded in training data and a lack of transparent source verification. Generative AI can produce fluent but factually inaccurate responses, struggle to assess the credibility of sources and fail to correct its own mistakes. The authors warn that these weaknesses are not marginal issues. They create risks of amplifying false narratives, especially in areas where reliable data are scarce.
Another structural concern involves synthetic data feedback loops. As AI-generated content increasingly populates online spaces, future models may be trained on outputs from earlier systems. This recursive training dynamic risks homogenizing knowledge and degrading accuracy over time. Control over these systems also confers significant power over information flows, raising concerns about commercial and political influence.
The second role, guardian, focuses on AI's capacity to detect and flag suspicious content. Generative AI and related models can monitor online platforms in real time, classify claims, assist fact-checking pipelines and help identify manipulated media. Because many models are language-agnostic, they can scale misinformation detection across borders and domains.
However, AI systems remain limited by the quality and breadth of their training data. They struggle with nuance, sarcasm and context-dependent reasoning. They may misclassify accurate content as false or fail to recognize emerging tactics in disinformation campaigns. Performance disparities across languages also risk creating unequal protection against misinformation, particularly in under-resourced regions.
The authors warn against blind reliance on automated fact-checking. Overconfidence in AI guardianship may weaken human critical analysis and create a false sense of security. Moreover, institutions could misuse AI-powered verification tools to advance political or economic interests, blurring the line between legitimate moderation and strategic control.
The third role, persuader, may be the most politically sensitive. Generative AI can engage users in dialogue-based exchanges designed to correct misperceptions. Research cited in the paper suggests AI systems can produce persuasive messages at scale, counter conspiracy beliefs and reinforce perceptions of scientific consensus. Their ability to personalize content increases their persuasive reach.
But the persuasive power of AI also carries systemic risks. Microtargeted messaging in elections or public health campaigns could be deployed for manipulative ends. Large-scale content generation may flood information environments with low-quality or misleading narratives. Individuals with lower AI literacy may be particularly susceptible to persuasive attacks, especially when content appears authoritative and coherent.
The authors argue that regulating microtargeting in sensitive domains is essential. Source transparency alone is unlikely to be sufficient. Effective oversight must address how persuasive systems are deployed, who controls them and which populations are targeted.
Integrator and collaborator: Shaping democratic deliberation
Beyond content creation and correction, generative AI is increasingly positioned as a mediator in public discourse. As an integrator, AI synthesizes diverse viewpoints and produces structured summaries intended to facilitate debate. In polarized environments, such systems could help identify common ground, map areas of disagreement and generate balanced briefs for citizen assemblies or online forums. Compared with human mediators, AI can rapidly integrate large volumes of data across languages and modalities.
The study highlights the potential of AI-generated summaries to support democratic deliberation. By clarifying competing arguments and structuring discussions, integrator systems could improve decision-making processes and reduce misinterpretation.
However, integration algorithms are not neutral. Biases in training data, model design and fine-tuning can skew which perspectives are emphasized. Malicious actors could manipulate systems to produce preordained outcomes or suppress dissenting voices. Aggregated summaries may exert normative pressure on individuals to conform to perceived consensus, weakening minority viewpoints.
The authors stress that AI should not replace human mediators in democratic processes. Instead, regulatory oversight and human-in-the-loop collaboration are needed to ensure minority perspectives are represented and that consensus building does not become conformity enforcement.
Closely related is the collaborator role, in which generative AI acts as a copilot in inquiry and research. It can help users formulate questions, organize sources, outline analyses and reflect on evidence quality. In academic and investigative contexts, such tools may enhance efficiency and broaden access to diverse information streams.
Yet the researchers warn of a growing illusion of understanding. AI systems often operate as black boxes, providing well-structured responses without revealing their reasoning processes. This opacity can lead users to overestimate reliability and accept outputs without scrutiny. Overreliance may erode critical thinking skills and reduce cognitive effort, especially among users with limited domain knowledge.
The collaborator role also raises concerns about confirmation bias. AI systems may inadvertently feed users content aligned with their preexisting beliefs rather than challenge them. For individuals unfamiliar with misinformation detection strategies, evaluating AI-generated outputs can create cognitive overload rather than clarity.
The study calls for AI literacy programs that emphasize vigilance, reflective use and awareness of system limitations. Generative AI should function as a copilot rather than a truth arbiter, with human judgment remaining central to final decisions.
Teacher and playmaker: Education, games and long-term resilience
The final two roles move into education and skill development, areas where generative AI may influence how future generations process information.
As a teacher, generative AI can provide personalized feedback, critique drafts and guide students through information evaluation exercises. Its scalability allows for rapid and individualized responses, potentially expanding access to educational support and media literacy training. AI-driven tutoring systems could help students practice fact-checking strategies and refine reasoning skills.
However, the evidence on educational impact remains mixed and limited. AI lacks contextual awareness, soft skills and ethical reasoning. It may misinterpret satire or complex arguments. There is also a risk of fostering passive learning, in which students rely on AI-generated answers rather than engaging deeply with tasks. Over time, this pattern could contribute to cognitive offloading and diminished critical thinking capacity.
The authors advocate for a human-centered approach in schools, with educators maintaining oversight. AI literacy must be embedded in curricula, teaching students how to critically assess AI outputs, recognize bias and use prompts effectively. The study also flags concerns about emotional profiling, manipulation and social scoring in educational settings, arguing that such practices should be prohibited.
The seventh role, playmaker, explores AI's use in adaptive, game-based learning environments. Serious games have shown promise in building resilience against misinformation by allowing users to practice detecting misleading techniques. Generative AI can automate aspects of game development, including narrative design and scenario generation, making such tools more accessible.
But AI-generated game content may lack depth, creativity and consistency. Systems can reinforce stereotypes, introduce bias and raise unresolved copyright issues. Ethical concerns also include the displacement of human designers and the potential manipulation of players through AI-driven monetization strategies.
To ensure effectiveness, the researchers argue that AI-assisted game design must prioritize fairness, accuracy and inclusivity. Human oversight remains essential to maintain ethical standards and avoid embedding systemic bias into learning tools.
Across all seven roles, the study identifies recurring structural risks: hallucinations, bias, opacity, manipulation potential and overreliance. It also highlights a broader governance challenge. Generative AI systems are shaped by the incentives of developers and users. Commercial pressures may prioritize efficiency and novelty over safety and moderation. Users may seek convenience and speed without fully appreciating risks.
The authors note that research at the intersection of AI, misinformation and human behavior remains in its early stages. Many existing studies rely on small samples, short-term assessments or self-reported measures. Long-term validation and randomized controlled trials are needed to understand sustained impacts.
First published in Devdiscourse.