How artificial intelligence can build trust and counter misinformation in crises
A multidisciplinary team of researchers has developed a pioneering framework to combat misinformation during disasters, a growing challenge in the artificial intelligence (AI) era that threatens global resilience, public safety, and trust in institutions.
The study, titled "A Toolbox to Deal with Misinformation in Disaster Risk Management" and published in AI & Society, presents an eight-step methodological framework for identifying, analyzing, and mitigating false or misleading information in crisis communication ecosystems.
Built upon the integration of artificial intelligence, communication science, and risk governance, the toolbox serves as a structured guide for policymakers, disaster managers, and researchers tasked with managing information flow during emergencies such as floods, wildfires, pandemics, and earthquakes.
A structured response to the misinformation crisis
The study acknowledges that misinformation has become a major operational and ethical challenge in disaster management. While real-time data and AI models have improved early warning and response systems, the same technologies have also amplified the spread of unverified or manipulative content.
To address this duality, the authors present an eight-step toolbox, a modular framework that can be adapted by national agencies, local authorities, and humanitarian organizations. Together, the steps form a sequential process that moves from situational understanding to mitigation and ethical management.
The first step calls for defining the communication context using tools such as the PESTEL model (Political, Economic, Social, Technological, Environmental, and Legal factors). This helps stakeholders identify systemic drivers of misinformation and map critical information channels, both traditional and digital.
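To make the idea concrete, here is a minimal Python sketch of how such a PESTEL scan might be captured as structured data; the factors, drivers, and channels shown are hypothetical examples for illustration, not taken from the paper:

```python
# Hypothetical sketch: a PESTEL scan recorded as structured data,
# mapping each factor to observed misinformation drivers and the
# channels where they circulate. All entries are illustrative.
from dataclasses import dataclass, field

@dataclass
class PestelFactor:
    name: str                                           # e.g. "Political"
    drivers: list[str] = field(default_factory=list)    # systemic drivers of misinformation
    channels: list[str] = field(default_factory=list)   # critical information channels

context_scan = [
    PestelFactor("Political", ["polarized relief-funding debate"], ["talk radio", "X/Twitter"]),
    PestelFactor("Technological", ["viral resharing without verification"], ["messaging apps"]),
    PestelFactor("Social", ["low trust in local authorities"], ["community groups"]),
]

for factor in context_scan:
    print(f"{factor.name}: drivers={factor.drivers} channels={factor.channels}")
```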
The second step focuses on detecting misinformation patterns, combining qualitative monitoring with AI-based methods like natural language processing and sentiment analysis. These techniques can identify recurring narratives, emotional triggers, and networked dissemination tactics that distort disaster communication.
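As a rough illustration of this step, the toy sketch below counts recurring narratives with hand-written patterns and scores emotional charge against a small lexicon; a real deployment would rely on trained NLP models, and every pattern and keyword here is an invented placeholder rather than anything from the study:

```python
# Toy sketch of misinformation pattern detection: count recurring
# narratives via regex patterns and score emotional charge with a tiny
# lexicon. Patterns, labels, and lexicon are invented for illustration.
import re
from collections import Counter

FEAR_WORDS = {"danger", "deadly", "coverup", "poison", "collapse"}  # illustrative lexicon

NARRATIVE_PATTERNS = {  # hypothetical recurring narratives
    "dam-failure rumor": re.compile(r"\bdam\b.*\b(burst|fail)", re.I),
    "aid-diversion claim": re.compile(r"\baid\b.*\b(stolen|diverted)", re.I),
}

def scan(posts: list[str]) -> None:
    narrative_counts = Counter()
    for post in posts:
        tokens = set(re.findall(r"[a-z']+", post.lower()))
        emotional_charge = len(tokens & FEAR_WORDS)   # crude emotional-trigger score
        for label, pattern in NARRATIVE_PATTERNS.items():
            if pattern.search(post):
                narrative_counts[label] += 1
        if emotional_charge >= 2:
            print(f"high emotional charge ({emotional_charge}): {post!r}")
    print("recurring narratives:", narrative_counts.most_common())

scan([
    "The dam is about to burst and officials are hiding it!",
    "Heard all the aid supplies were stolen by contractors.",
    "Deadly poison in the water, total collapse coming, danger!",
])
```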
The third and fourth steps assess the impact on risk perception and guide the design of targeted countermeasures, respectively. The researchers highlight that misinformation often reshapes how communities perceive vulnerability and trust authorities. Interventions must therefore account for social and psychological dimensions, not just factual corrections.
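A simple worked example of the kind of measurement this implies, using entirely invented survey numbers, is to compare community risk-perception scores before and after exposure to a false narrative:

```python
# Hypothetical worked example: quantifying how a false narrative shifts
# community risk perception, using before/after survey scores on a 1-5
# scale. The data and the mean-shift metric are illustrative assumptions.
from statistics import mean

before = [2, 3, 2, 3, 2]   # perceived flood risk before the rumor spread
after  = [4, 5, 4, 3, 5]   # perceived flood risk after exposure

shift = mean(after) - mean(before)
print(f"mean risk-perception shift: {shift:+.2f}")
# A large shift driven by false content suggests countermeasures must
# address the distorted perception itself, not just the factual error.
```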
The fifth and sixth steps move into operational execution, implementing countermeasures such as prebunking, debunking, and digital literacy campaigns, followed by continuous evaluation of effectiveness. The toolbox insists on iterative learning: interventions must adapt to the changing nature of misinformation and evolving digital platforms.
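One way to picture that iterative loop, sketched here with made-up data and an arbitrary 10 percent improvement threshold, is to measure a flagged narrative's prevalence before and after each countermeasure cycle and keep only interventions that measurably reduce it:

```python
# Minimal sketch of iterative evaluation: compare a flagged narrative's
# prevalence before and after a countermeasure cycle. The sample posts
# and the 10% relative-drop threshold are invented for illustration.

def prevalence(posts: list[str], keyword: str) -> float:
    """Share of posts mentioning the flagged narrative keyword."""
    if not posts:
        return 0.0
    return sum(keyword in p.lower() for p in posts) / len(posts)

def evaluate_cycle(before: list[str], after: list[str], keyword: str) -> bool:
    p0, p1 = prevalence(before, keyword), prevalence(after, keyword)
    improved = p1 < p0 * 0.9   # require at least a 10% relative drop
    print(f"prevalence {p0:.0%} -> {p1:.0%}  keep intervention: {improved}")
    return improved

evaluate_cycle(
    before=["dam burst rumor again", "any updates?", "the dam burst, run!"],
    after=["official: dam is intact", "any updates?", "thanks for clarifying"],
    keyword="dam burst",
)
```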
The seventh step introduces ethical and legal compliance, aligning practices with international frameworks like the EU Digital Services Act (DSA), General Data Protection Regulation (GDPR), and the EU AI Act. Ethical governance, according to the authors, is integral to maintaining trust and ensuring that countermeasures do not infringe on rights such as privacy or freedom of expression.
Finally, the eighth step concerns implementation and management, offering operational guidance for integrating the toolbox into real-world policy and institutional workflows.
Bridging AI, ethics, and human trust
The authors argue that misinformation in disaster contexts is not merely a communication problem; it is a systemic risk. It can erode trust in scientific institutions, delay emergency response, and lead to life-threatening decisions by the public.
The study's framework proposes that AI technologies must be part of both the problem analysis and the solution. Artificial intelligence can automate detection, but it must also be governed ethically. The researchers advocate for AI-assisted monitoring systems that flag misinformation trends in real time while respecting data privacy and transparency standards.
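The sketch below gestures at what such a monitor could look like: it flags hourly spikes against a rolling baseline and stores only hashed author identifiers. The thresholds and hashing scheme are assumptions for illustration, not the paper's design:

```python
# Sketch of a privacy-aware trend flag: alert when a narrative's hourly
# volume exceeds a rolling baseline, retaining only hashed author IDs
# rather than raw personal identifiers. Parameters are illustrative.
import hashlib
from collections import deque

class TrendMonitor:
    def __init__(self, window: int = 24, spike_factor: float = 3.0):
        self.hourly_counts = deque(maxlen=window)  # rolling baseline
        self.spike_factor = spike_factor

    @staticmethod
    def pseudonymize(user_id: str) -> str:
        # One-way hash: lets analysts count distinct spreaders without
        # storing who they are.
        return hashlib.sha256(user_id.encode()).hexdigest()[:12]

    def observe_hour(self, count: int) -> bool:
        baseline = (sum(self.hourly_counts) / len(self.hourly_counts)
                    if self.hourly_counts else 0.0)
        spike = baseline > 0 and count > self.spike_factor * baseline
        self.hourly_counts.append(count)
        return spike

monitor = TrendMonitor()
for hour, count in enumerate([5, 6, 4, 7, 30]):  # sudden jump in hour 4
    if monitor.observe_hour(count):
        print(f"hour {hour}: flag for human review (count={count})")
```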
Importantly, the toolbox integrates human oversight throughout. AI's computational capacity must be balanced with expert judgment, ensuring that automated decisions remain explainable and accountable. The authors emphasize that "trust cannot be algorithmically manufactured": effective disaster communication relies on human empathy and social credibility as much as on technical accuracy.
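A common human-in-the-loop pattern consistent with that principle, again purely illustrative rather than the authors' implementation, is for the system to record a machine-readable rationale with every flag and require human sign-off before any action is taken:

```python
# Illustrative human-in-the-loop pattern: the system only recommends;
# every automated flag carries a rationale (for explainability) and a
# required human decision (for accountability). All names are invented.
from dataclasses import dataclass

@dataclass
class Flag:
    post_id: str
    rationale: str           # why the model flagged it
    reviewer: str | None = None
    approved: bool = False   # no public action until a human decides

    def review(self, reviewer: str, approved: bool) -> None:
        self.reviewer, self.approved = reviewer, approved

flag = Flag("post-4711", "matched 'dam-failure rumor' pattern; 30 shares/hour")
flag.review(reviewer="duty-officer-2", approved=True)
print(flag)
```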
This approach places the paper at the forefront of an emerging research field that connects AI ethics, risk governance, and crisis communication. By merging analytical rigor with practical application, it offers a realistic and adaptable model for policymakers navigating the information chaos that often accompanies disasters.
Operational and policy implications
The paper provides pragmatic recommendations for national and local authorities seeking to institutionalize misinformation management. These include:
- Establishing interdisciplinary crisis communication teams combining data scientists, social psychologists, and media experts.
- Integrating AI-based monitoring dashboards into emergency operations centers.
- Launching public education programs to increase digital literacy and resilience against false information.
- Setting up ethical review boards to oversee AI and data-driven communication systems.
- Ensuring international policy alignment, particularly under EU regulatory standards and United Nations risk frameworks.
The authors argue that effective risk communication depends on proactive, not reactive, information management. Prebunking, which anticipates false narratives before they spread, is far more effective than post-crisis corrections.
The paper also highlights the importance of trust-building as a long-term resilience strategy. Misinformation thrives where citizens distrust official sources; thus, consistent transparency, inclusivity, and ethical accountability are crucial.
Incorporating this framework into national disaster management plans could significantly enhance preparedness, coordination, and social cohesion.
First published in: Devdiscourse