Can AI help treat gaming disorder? Big potential and gaps
Internet Gaming Disorder is emerging as a growing public health concern, particularly among adolescents and young adults. Mental health systems are struggling to keep pace with demand for early detection, sustained treatment, and relapse prevention, raising questions about whether digital tools can responsibly extend the reach of clinical care.
A narrative review published in AI Med examines these concerns. Titled "LLMs in the Assessment and Care of Internet Gaming Disorder," the study evaluates how large language models could support screening, monitoring, and treatment of gaming disorder, while warning that current evidence remains too limited for unsupervised clinical use.
AI screening shows promise but remains experimental
Internet Gaming Disorder often develops gradually, with individuals normalizing excessive gaming long before clinical thresholds are crossed. Traditional screening tools rely on self-report questionnaires that many users avoid due to stigma or lack of awareness. LLMs, by contrast, can embed screening into natural conversation, lowering barriers to disclosure.
The review highlights emerging research showing that transformer-based language models can analyze open-ended text responses and predict standardized IGD severity scores with moderate accuracy. These models detect linguistic patterns associated with preoccupation, withdrawal, and loss of control. While their performance remains below that of neurophysiological classifiers, text-based screening offers scalability unmatched by clinical interviews or hardware-dependent diagnostics.
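To make the approach concrete, the sketch below shows what such a text-based screening pipeline could look like: a transformer with a single-output regression head scoring one free-text response. The checkpoint name, score scale, and example response are illustrative assumptions, not details from the reviewed studies, and a usable model would first need fine-tuning on responses labeled with a validated IGD questionnaire.

```python
# Illustrative sketch: estimating an IGD severity score from free text with a
# transformer regression head. The checkpoint and score scale are placeholders;
# without fine-tuning on labeled questionnaire data the output is meaningless.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "distilbert-base-uncased"  # placeholder; a fine-tuned clinical model would go here

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=1, problem_type="regression"
)
model.eval()

def predict_severity(text: str) -> float:
    """Return a continuous severity estimate for one open-ended response."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 1) for a single regression output
    return logits.item()

response = "I keep thinking about the game at school and get irritable when I can't log in."
print(f"Estimated severity score: {predict_severity(response):.2f}")
```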
The authors synthesize evidence from multimodal machine learning studies that combine language analysis with physiological data such as EEG, fNIRS, or fMRI. These approaches achieve higher classification accuracy when distinguishing individuals with IGD from healthy controls or those with other addictions. However, they require specialized equipment and remain impractical for population-level screening.
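The review does not report implementation details for these multimodal systems. A rough feature-level fusion baseline, assuming pre-extracted text embeddings and EEG band-power features, could look like the following; the dimensions, synthetic data, and classifier choice are placeholders for illustration only.

```python
# Illustrative feature-level fusion: concatenate text embeddings with physiological
# features (e.g., EEG band power) and train a linear classifier to separate an IGD
# group from controls. All data here is synthetic and the setup is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_participants = 80
text_emb = rng.normal(size=(n_participants, 768))   # e.g., sentence-level embeddings
eeg_power = rng.normal(size=(n_participants, 20))   # e.g., band power per channel group
labels = rng.integers(0, 2, size=n_participants)    # 1 = IGD group, 0 = control

fused = np.hstack([text_emb, eeg_power])            # simple feature concatenation
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Cross-validated accuracy on synthetic data: {scores.mean():.2f}")
```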
The review notes that all existing IGD-specific AI models are research prototypes. Sample sizes are small, demographic diversity is limited, and external validation is absent. None of the reviewed systems are approved diagnostic tools, and none should be used in isolation to confirm IGD. The authors argue that, at present, AI screening should function as triage rather than diagnosis, flagging individuals who may benefit from human assessment.
Treatment potential tempered by lack of trials
While screening applications show early feasibility, therapeutic use of LLMs remains largely theoretical in the context of Internet Gaming Disorder. Cognitive–behavioral therapy remains the gold standard for IGD treatment, but access is limited by cost, stigma, and clinician shortages. This has fueled interest in AI-delivered interventions that can provide continuous, personalized support.
The review examines evidence from adjacent domains such as depression, anxiety, and substance use disorders, where chatbot-based interventions have demonstrated modest symptom reductions. These systems can deliver structured cognitive exercises, motivational interviewing techniques, and psychoeducation at scale. However, the authors are explicit that no randomized controlled trials have yet tested LLM-driven therapy specifically for Internet Gaming Disorder.
As a result, all proposed therapeutic applications for IGD remain extrapolations. The review cautions that gaming disorder has distinctive features (strong social reinforcement, platform design incentives, and developmental vulnerabilities) that may limit how well findings transfer from other mental health conditions. Without IGD-specific trials, claims of therapeutic effectiveness remain speculative.
The authors also warn of the risk of over-reliance. Always-available AI systems may discourage users from seeking human care, particularly among younger populations already comfortable confiding in digital tools. In the absence of robust escalation mechanisms, this could delay intervention in severe cases involving comorbid depression, anxiety, or suicidal ideation.
The study notes that any future therapeutic deployment must be hybrid by design. LLMs may support skill-building, monitoring, and engagement, but licensed clinicians must retain oversight, particularly for diagnosis, treatment planning, and crisis response.
Ethics, governance, and the limits of automation
Internet Gaming Disorder disproportionately affects minors and young adults, making data protection and informed consent especially sensitive. Language models trained on conversational data, gaming logs, or biometric inputs risk exposing deeply personal information if safeguards fail.
Algorithmic bias is another major concern. Most existing datasets skew toward young male participants from limited geographic regions. Models trained on such data may misclassify or overlook symptoms in women, older users, or culturally distinct populations. Without diverse training data and external validation, AI systems risk reinforcing inequities in access to care.
The authors also address the opacity of LLMs. Clinical trust depends on understanding how decisions are made, yet many state-of-the-art models operate as black boxes. In mental health contexts, lack of interpretability complicates accountability and informed decision-making, particularly when recommendations influence behavior over time.
Crisis handling represents a further unresolved challenge. Studies in broader mental health applications show inconsistent performance in detecting and responding to suicidal ideation. For IGD, where comorbidity with depression is common, failure to escalate appropriately could have serious consequences. The review argues that crisis detection must be standardized, auditable, and tightly integrated with human support systems.
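What "standardized, auditable, and tightly integrated" could mean in practice is sketched below: every crisis check writes an audit record, and any flagged message is routed to a human responder rather than handled by the model. The keyword trigger and notification hook are hypothetical; a deployed system would need a validated risk classifier and a real on-call workflow.

```python
# Illustrative escalation hook: each check is logged for audit, and flagged
# messages are handed to a human. Trigger terms and the contact path are placeholders.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("crisis_audit")

CRISIS_TERMS = {"suicide", "kill myself", "end it all"}  # stand-in for a validated classifier

def notify_on_call_clinician(user_id: str) -> None:
    # Hypothetical integration point with a human support system.
    print(f"[escalation] routing user {user_id} to on-call clinician")

def check_and_escalate(user_id: str, message: str) -> bool:
    """Return True if the message was escalated to a human responder."""
    flagged = any(term in message.lower() for term in CRISIS_TERMS)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "flagged": flagged,
    }))
    if flagged:
        notify_on_call_clinician(user_id)
    return flagged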
From a regulatory perspective, the authors situate IGD-related AI within emerging frameworks such as the EU Artificial Intelligence Act and international guidelines for trustworthy AI in healthcare. They argue that compliance must go beyond formal regulation to include continuous monitoring, bias testing, and post-deployment evaluation.
A cautious path forward
The review outlines a roadmap for responsible development grounded in evidence, ethics, and human-centered care.
Hybrid human–AI models emerge as the most viable approach. In such systems, AI handles low-risk, high-frequency tasks such as screening conversations, psychoeducation, and progress summaries, while clinicians manage diagnosis, complex therapy, and crisis intervention. This division of labor could expand reach without sacrificing safety.
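A minimal sketch of that division of labor, with hypothetical task categories, might route requests as follows; anything unrecognized defaults to a clinician so human oversight is the fallback rather than the exception.

```python
# Illustrative task router for a hybrid human-AI service: low-risk, high-frequency
# tasks go to the AI assistant, while diagnosis, complex therapy, and crisis
# handling always go to a clinician. Task categories are assumptions for the example.
from enum import Enum

class Handler(Enum):
    AI_ASSISTANT = "ai_assistant"
    CLINICIAN = "clinician"

AI_TASKS = {"screening_conversation", "psychoeducation", "progress_summary"}
CLINICIAN_TASKS = {"diagnosis", "treatment_planning", "complex_therapy", "crisis_response"}

def route(task: str) -> Handler:
    if task in CLINICIAN_TASKS:
        return Handler.CLINICIAN
    if task in AI_TASKS:
        return Handler.AI_ASSISTANT
    return Handler.CLINICIAN  # default to human oversight for anything unrecognized

print(route("psychoeducation"))   # Handler.AI_ASSISTANT
print(route("crisis_response"))   # Handler.CLINICIAN
```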
The authors also point to future opportunities in multimodal integration. Combining language models with wearable data or gaming platform APIs could enable earlier detection of harmful patterns, provided users give informed consent and data minimization principles are enforced. Longitudinal studies integrating behavioral, linguistic, and physiological signals may improve understanding of IGD progression.
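As a rough illustration of data minimization in that setting, the sketch below keeps only coarse, explicitly consented fields before any gaming-platform or wearable data reaches a model; the field names and consent flags are invented for the example.

```python
# Illustrative data-minimization step: drop raw logs and retain only coarse,
# consented aggregates. Fields and the consent model are hypothetical.
from dataclasses import dataclass

MINIMAL_FIELDS = {"daily_play_minutes", "late_night_sessions", "sleep_hours"}

@dataclass
class Consent:
    share_play_time: bool
    share_sleep: bool

def minimize(record: dict, consent: Consent) -> dict:
    allowed = set()
    if consent.share_play_time:
        allowed |= {"daily_play_minutes", "late_night_sessions"}
    if consent.share_sleep:
        allowed |= {"sleep_hours"}
    return {k: v for k, v in record.items() if k in MINIMAL_FIELDS and k in allowed}

raw = {"daily_play_minutes": 310, "late_night_sessions": 4, "sleep_hours": 5.5,
       "chat_logs": "...", "purchase_history": "..."}
print(minimize(raw, Consent(share_play_time=True, share_sleep=False)))
```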
Additionally, the review calls for rigorous clinical trials. Randomized controlled studies with diverse populations are needed to establish whether AI-supported interventions reduce gaming time, improve functioning, and sustain recovery. Without such evidence, AI applications should remain adjunctive rather than standalone.
First published in: Devdiscourse