AI brings new hope to Africa’s health crisis; skills shortages slow real-world impact


Artificial intelligence (AI) has made significant strides in infectious disease prediction, surveillance, and diagnosis, with research activity growing rapidly since 2019. Machine learning and deep learning models are now widely used to classify disease cases, forecast outbreaks, and analyze epidemiological trends, often achieving high accuracy in controlled settings. These are the findings of a new review that highlights both the transformative potential of AI-driven disease modelling and the systemic barriers that continue to limit its effectiveness, particularly in regions facing the highest disease burdens.

The study, titled "The use of artificial intelligence based modelling techniques in One Health-related infectious disease studies in Sub-Saharan Africa: a review," published in Frontiers in Artificial Intelligence, analyzes 62 peer-reviewed studies to assess how AI is being deployed across human, animal, and environmental health systems.

While the study focuses on sub-Saharan Africa, its findings reflect broader global patterns, where technological progress often outpaces the systems needed to support it.

AI advances reshape disease prediction but remain narrowly focused

Across the analyzed studies, AI applications are heavily concentrated in classification tasks, accounting for nearly two-thirds of all use cases. These models are primarily used to determine infection status, identify high-risk populations, and detect pathogens in clinical or environmental data. Regression-based models, which predict trends such as disease incidence, form a smaller but still significant share of applications.

Deep learning techniques, particularly convolutional neural networks, consistently demonstrate the highest performance, often reaching near-perfect accuracy in image-based diagnostics such as malaria parasite detection and tuberculosis screening. Ensemble models like Random Forest and XGBoost also play a dominant role, offering reliable results across a range of predictive tasks.
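As a rough illustration of the kind of classification task the review describes, the sketch below trains a Random Forest, one of the ensemble methods found to dominate these studies, on a binary "infection status" problem. All data here is synthetic and the feature set is hypothetical; real studies would use clinical, laboratory, or epidemiological features.

```python
# Illustrative sketch only: a binary infection-status classifier using a
# Random Forest ensemble. The dataset is synthetic, standing in for the
# tabular clinical/epidemiological features used in the reviewed studies.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Generate a synthetic tabular dataset (500 cases, 10 features)
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=6, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Fit the ensemble and evaluate on held-out cases
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.2f}")
```

The "near-perfect accuracy in controlled settings" the review reports typically refers to exactly this kind of held-out evaluation; as the study notes, performance on such benchmarks often does not carry over to deployment on data from different populations or regions.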

However, the study highlights a critical limitation. Despite the technical sophistication of these models, most applications remain narrowly focused on specific diseases and datasets. Malaria alone accounts for nearly a quarter of all studies, followed by HIV and COVID-19, while many other infectious diseases receive limited attention.

This concentration creates gaps in preparedness, particularly for emerging or zoonotic diseases that require integrated analysis across human, animal, and environmental systems. It also reflects the uneven availability of data, which shapes where and how AI can be applied.

One Health integration lags as human data dominates

The One Health approach focuses on the interconnectedness of human, animal, and environmental health. Despite its importance in understanding disease transmission, most AI models rely almost exclusively on human health data.

More than three-quarters of the studies analyzed use only human datasets, with minimal incorporation of environmental or animal data. Only a small fraction of research combines all three components, limiting the ability of AI systems to capture the full complexity of disease dynamics.

This imbalance has significant implications. Many infectious diseases, particularly in developing regions, are influenced by environmental factors such as climate variability and ecological changes, as well as interactions between humans and animals. Without integrating these dimensions, AI models risk overlooking key drivers of disease emergence and spread.

The study points to structural reasons for this gap. Human health data is more readily available and better standardized, while environmental and animal data are often fragmented, difficult to access, or collected using incompatible systems. As a result, researchers tend to focus on datasets that are easier to use, even if they provide an incomplete picture.

This limitation extends beyond regional contexts. Globally, efforts to implement integrated health surveillance systems face similar challenges, including data silos, lack of interoperability, and insufficient coordination across sectors.

Infrastructure, data gaps, and skills shortages slow real-world impact

While AI models show strong performance in research settings, their deployment in real-world health systems remains constrained by multiple systemic barriers. The study identifies infrastructure limitations, data governance issues, and shortages of skilled professionals as key obstacles to scaling AI-driven solutions.

Digital infrastructure remains uneven, particularly in rural and low-resource settings. Many health facilities lack reliable internet access, sufficient computing power, or consistent electricity supply, making it difficult to implement advanced AI tools that require continuous data processing.

Data challenges further complicate adoption. The review finds that most available datasets are short-term and disease-specific, with limited access to long-term, integrated data that could support more robust predictive models. The absence of comprehensive datasets combining clinical, environmental, and animal health information significantly restricts the potential of AI within a One Health framework.

In addition, concerns around data privacy, governance, and algorithmic bias pose risks to trust and equity. AI systems trained on non-local data may produce inaccurate or biased predictions when applied in different contexts, highlighting the need for region-specific datasets and transparent regulatory frameworks.

Human capital is another major obstacle. The study emphasizes a shortage of professionals with the skills needed to develop, interpret, and maintain AI systems. This gap not only limits the adoption of AI technologies but also creates dependence on external expertise, raising questions about long-term sustainability.

Uneven global adoption reveals widening digital health divide

The study's findings reveal a broader pattern of uneven AI adoption, with research and implementation concentrated in a limited number of regions and institutions. Within the study's focus area, countries with stronger research infrastructure and digital systems dominate AI-related work, while others remain underrepresented.

This uneven distribution reflects global disparities in access to technology, funding, and expertise. In more advanced settings, integrated data systems and institutional collaboration enable the development of sophisticated AI models that support real-time disease surveillance and response.

On the other hand, regions with limited infrastructure face significant barriers to adoption, even as they bear a disproportionate burden of infectious diseases. This mismatch raises concerns about a widening digital health divide, where the benefits of AI are not evenly distributed.

The study also identifies emerging opportunities that could help bridge these gaps. Expanding mobile connectivity, cloud-based computing, and low-cost digital tools offer new pathways for deploying AI in resource-constrained environments. Smartphone-based diagnostics, for example, have shown promise in enabling real-time disease detection without the need for specialized equipment.

AI's future in disease control depends on integration and investment

While AI has the potential to transform infectious disease control, its impact will depend on addressing the structural challenges that limit its use. Investments in digital infrastructure, data integration, and workforce development are essential to unlock the full benefits of AI-driven health systems.

Equally important is the need to strengthen the One Health approach by integrating human, animal, and environmental data into unified models. This requires coordinated efforts across sectors, as well as the development of interoperable data systems that can support comprehensive analysis.

The study also highlights the importance of balancing technological innovation with practical considerations. While complex AI models offer high accuracy, simpler and more interpretable approaches may be more suitable in low-resource settings where computational capacity is limited.

  • FIRST PUBLISHED IN:
  • Devdiscourse