AI becomes double-edged sword in fight against rising fake content online
The rapid rise of artificial intelligence (AI) is intensifying global concerns over disinformation, with new research warning that the same technologies driving innovation are also accelerating the scale, speed, and sophistication of false information across digital platforms. With generative AI tools becoming widely accessible, researchers say the battle between automated deception and digital truth is entering a critical new phase.
A new study, titled "Artificial Intelligence and Disinformation: A State-of-the-Art Review Through a Systematized Literature Review," published in Social Sciences, analyses how the relationship between AI and disinformation evolved between 2020 and 2025. The research draws on 62 peer-reviewed studies to map the key trends, risks, and emerging strategies shaping this rapidly expanding field.
AI emerges as both engine and weapon in disinformation ecosystem
The study identifies a growing consensus that AI is no longer just a supporting technology in the spread of false information but has become a central driver of disinformation ecosystems. Advanced AI systems, particularly generative models and large language models, are capable of producing large volumes of realistic text, images, audio, and video content with minimal human input.
This shift has fundamentally altered the scale of disinformation campaigns. Automated systems can now generate fake narratives, impersonate public figures, and create convincing but entirely fabricated content at speeds and volumes that far exceed traditional methods. The research highlights that these capabilities are being used across a range of contexts, including political propaganda, commercial manipulation, and coordinated influence operations.
Deepfakes and synthetic media are identified as among the most visible manifestations of this trend. These technologies enable the creation of highly realistic audiovisual content that can blur the line between truth and fabrication. However, the study notes that the impact of deepfakes extends beyond direct deception. Their widespread presence contributes to a broader erosion of trust, making it increasingly difficult for audiences to distinguish between authentic and manipulated content.
Simpler forms of manipulation, such as memes and low-cost visual edits, continue to play a significant role in spreading disinformation. These formats often achieve greater virality than sophisticated synthetic media and can be more effective in shaping public opinion, particularly when combined with emotionally charged narratives.
The research situates these developments within a broader transformation of digital communication, where social media platforms, algorithmic recommendation systems, and user-generated content interact to amplify disinformation. In this environment, artificial intelligence does not operate in isolation but is embedded within a complex network of technological and social factors that drive the circulation of false information.
Research field expands rapidly as AI disinformation becomes global concern
The study highlights a sharp increase in academic attention to AI-driven disinformation, particularly following the emergence of generative AI tools in 2022. The number of publications on the topic has grown significantly, accompanied by a diversification of research themes and methodologies.
Analysis of the literature reveals a predominance of qualitative approaches, accounting for more than half of the studies reviewed, with researchers focusing on thematic analysis, case studies, and interpretative frameworks. Quantitative and mixed-method approaches are also present but less common, reflecting the complexity and evolving nature of the subject.
The research identifies five major thematic areas that structure the field. These include AI as a source of disinformation, AI as a tool to combat it, regulatory and ethical frameworks, deepfakes and audiovisual manipulation, and the role of AI in education and media literacy.
This classification reflects the dual nature of AI in the disinformation landscape. On one hand, AI enables the rapid production and dissemination of misleading content. On the other hand, it offers new tools for detection, verification, and mitigation.
Keyword analysis further illustrates the centrality of artificial intelligence within the research field. The term appears as the dominant node in conceptual networks, closely linked to disinformation, fake news, and misinformation. Over time, the focus has shifted from traditional concerns about misinformation to a broader and more complex understanding of AI-driven content manipulation.
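The kind of keyword co-occurrence mapping described above can be sketched in a few lines: count how often keyword pairs appear together across study records, then rank each term by how many distinct partners it connects to. The keyword lists below are invented for illustration and are not taken from the study's dataset.

```python
from collections import defaultdict
from itertools import combinations

# Invented keyword lists standing in for per-study metadata (illustrative only).
PAPERS = [
    ["artificial intelligence", "disinformation", "deepfakes"],
    ["artificial intelligence", "fake news", "social media"],
    ["artificial intelligence", "misinformation", "fact-checking"],
    ["disinformation", "fake news", "social media"],
]

# Build an undirected co-occurrence network: edge weight = number of
# records in which the two keywords appear together.
edges = defaultdict(int)
for keywords in PAPERS:
    for a, b in combinations(sorted(keywords), 2):
        edges[(a, b)] += 1

# Degree = number of distinct co-occurring partners per keyword.
partners = defaultdict(set)
for a, b in edges:
    partners[a].add(b)
    partners[b].add(a)

dominant = max(partners, key=lambda k: len(partners[k]))
print(dominant)  # prints "artificial intelligence", the best-connected node
```

In a real bibliometric workflow the same degree-centrality idea is applied to thousands of records, but the structure — pairwise co-occurrence edges, then ranking nodes by connectivity — is the same.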
The study also reveals the interdisciplinary nature of the field, with contributions spanning communication, social sciences, computer science, and artificial intelligence. Communication emerges as the most prominent discipline, highlighting the central role of media systems in shaping the dynamics of disinformation.
AI tools show promise but struggle against complex disinformation dynamics
While AI is widely seen as a key tool in combating disinformation, the study highlights significant limitations in current approaches. AI systems are increasingly used to detect and classify false content, identify disinformation networks, and support fact-checking processes. These tools rely on techniques such as natural language processing, machine learning, and pattern recognition to analyze large volumes of data.
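To illustrate the text-classification approach these detection tools build on, the sketch below trains a minimal bag-of-words Naive Bayes classifier on a toy labeled corpus. The training examples and labels are invented for illustration; production fact-checking systems rely on far larger corpora and more sophisticated models than this.

```python
from collections import Counter, defaultdict
import math

# Toy labeled corpus -- invented examples, purely illustrative.
TRAIN = [
    ("miracle cure doctors hate revealed secret", "disinfo"),
    ("shocking truth they are hiding from you", "disinfo"),
    ("anonymous sources confirm secret plot exposed", "disinfo"),
    ("council approves budget for road repairs", "legit"),
    ("study published in peer reviewed journal", "legit"),
    ("officials release quarterly economic report", "legit"),
]

def train(examples):
    """Count words per class, with the vocabulary for add-one smoothing."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Return the class with the highest posterior log-probability."""
    total = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify("secret cure they are hiding", *model))        # prints "disinfo"
print(classify("peer reviewed study on budget report", *model))  # prints "legit"
```

Even this toy version hints at the limitations the study describes: the model only matches surface word statistics, so irony, context, and multimedia content are entirely invisible to it.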
However, the effectiveness of these systems remains uneven. Most existing tools focus on textual analysis, which is relatively well developed, but struggle to address the growing importance of multimedia content. Audio- and video-based disinformation, including deepfakes, presents significant challenges due to its complexity and the rapid evolution of manipulation techniques.
Another major limitation is the difficulty of interpreting context. AI systems often struggle to detect irony, sarcasm, and culturally specific references, which are frequently used in disinformation campaigns. This limits their ability to identify nuanced forms of manipulation and reduces their overall accuracy.
The study also points to challenges related to data quality and availability. Many AI models rely on large datasets for training, but high-quality, multilingual data is often scarce. This can lead to biases and reduce the effectiveness of detection systems in diverse linguistic and cultural contexts.
Trust emerges as a critical factor in the adoption of AI-based solutions. Users may be skeptical of automated systems, particularly when they lack transparency or clear explanations for their decisions. This can lead to resistance or even rejection of AI-generated fact-checking, limiting its impact.
The research reinforces the importance of hybrid approaches that combine human expertise with machine efficiency. Journalists, fact-checkers, and other information professionals play a crucial role in interpreting results, providing context, and ensuring accountability. Fully automated solutions are widely viewed as insufficient for addressing the complexity of disinformation.
Regulation, ethics, and literacy seen as critical frontlines
The study highlights the growing importance of regulatory and ethical frameworks in addressing AI-driven disinformation. Governments and international organizations are increasingly recognizing the need for coordinated action, but progress remains uneven.
The European Union's AI Act is identified as a key milestone, reflecting efforts to establish comprehensive rules for the development and use of artificial intelligence. However, the study notes that existing frameworks face significant challenges, particularly in addressing issues such as deepfakes, platform accountability, and cross-border disinformation campaigns.
The research calls for collaborative governance models that involve governments, technology companies, media organizations, and civil society. Such approaches aim to balance innovation with the protection of democratic values and individual rights.
Media literacy and education are emerging as essential components of the response to disinformation. The study finds that improving algorithmic literacy can enhance individuals' ability to recognize and critically evaluate AI-generated content. Educational initiatives that focus on understanding how AI systems work and how they can be misused are seen as critical for building resilience.
However, the integration of AI into education also presents challenges. Students and educators have differing perceptions of generative AI tools, with concerns about academic integrity, reliability, and the potential for misuse. This highlights the need for clear policies and ethical guidelines to support responsible use.
FIRST PUBLISHED IN: Devdiscourse