AI can miss critical animal behavior in naturalistic zoo settings

Red panda populations continue to decline due to habitat loss and fragmentation. Zoos play an essential role in maintaining genetic diversity and supporting species survival, but they face increasing regulatory pressure to demonstrate continuous and systematic animal welfare monitoring. Traditional observation methods, which rely heavily on human interpretation, are increasingly seen as insufficient to meet these standards.

Amidst this crisis, machine learning has emerged as a potential solution to these limitations, offering continuous, non-invasive monitoring that can operate across the full daily cycle of animals, including nocturnal and crepuscular activity patterns that are often missed by human observers.

The study, titled "Applicability of Machine Learning in Behavioural Monitoring of the Red Panda (Ailurus fulgens) in Zoos", published in Animals, investigates the feasibility of using machine learning models to continuously track and classify red panda behavior in a modern zoo enclosure. It highlights both the technological progress made in automated monitoring and the practical barriers that limit its reliability in naturalistic settings.

Machine learning offers breakthrough potential for continuous welfare monitoring

The study highlights the growing role of AI in biological and ecological research, particularly in automating data-intensive tasks such as behavior tracking. By using video footage captured from multiple cameras in a red panda enclosure, the researchers trained a machine learning system to detect the animal and classify its behaviors into categories such as locomotion, resting, consumption, and grooming.

The system relied on a combination of object detection and behavior categorization models, with the latter demonstrating strong performance once the animal was successfully identified. The behavior classification model achieved an overall accuracy of 76 percent, with particularly strong results in identifying movement and resting patterns.

This level of accuracy highlights the capability of machine learning to interpret animal behavior in a structured and repeatable way, addressing a key limitation of human observation: subjectivity. Even with trained observers, discrepancies often arise in how behaviors are categorized, making it difficult to ensure consistency across studies and institutions.

Another major advantage of automated systems is their ability to operate continuously. Red pandas are most active during twilight and nighttime hours, periods that are typically underrepresented in manual observation datasets. By capturing a full 24-hour behavioral profile, machine learning systems can provide a more complete understanding of activity patterns, enabling earlier detection of behavioral changes that may signal health or welfare issues.

This continuous monitoring capability aligns with evolving global standards for animal welfare, which emphasize systematic and ongoing assessment rather than periodic checks. The study suggests that, if refined, machine learning tools could function as an early warning system for zookeepers, allowing them to detect subtle changes in behavior before they escalate into more serious problems.

Detection failures and environmental complexity limit real-world accuracy

Despite strong performance in behavior classification, the study identifies a critical bottleneck in the object detection stage of the system. The machine learning model frequently failed to detect the red panda in the first place, particularly in areas of the enclosure with dense vegetation, complex structures, or poor visibility.

This limitation had a cascading effect on the overall system. Because behavior classification depends on successful detection, any missed detections resulted in gaps in the data. The study found that 76 percent of the total observation time was classified as unidentifiable by the automated system, compared to just over 40 percent in manual observations.
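The cascading dependence described above can be made concrete with a short sketch. The functions and frame structure below are hypothetical stand-ins (the study's actual models are not published with the article); the sketch only shows the control flow in which a frame that fails detection can never reach the behavior classifier:

```python
# Illustrative sketch only: `detect_panda` and `classify_behaviour` are
# hypothetical placeholders for the study's detection and classification
# models. The point is the pipeline structure, not the models themselves.

def detect_panda(frame):
    # Placeholder for the object-detection stage; a real system would run
    # a trained detector on the video frame here.
    return frame.get("detection")

def classify_behaviour(crop):
    # Placeholder for the behaviour-classification stage.
    return crop.get("behaviour", "unknown")

def label_frame(frame, min_confidence=0.5):
    """Two-stage pipeline: a missed or low-confidence detection means the
    frame is logged as unidentifiable and never reaches the classifier."""
    det = detect_panda(frame)
    if det is None or det["confidence"] < min_confidence:
        return "unidentifiable"
    return classify_behaviour(det["crop"])
```

Because classification is gated on detection, every detection failure becomes a gap in the behavioral record, regardless of how accurate the classifier itself is.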

These gaps were most pronounced in behaviors such as locomotion and resting, where the model significantly underestimated activity levels. In contrast, behaviors like feeding and grooming showed closer alignment with manual observations, suggesting that certain activities are easier to detect and classify than others.

The root cause of these detection failures lies in the naturalistic design of modern zoo enclosures. Features such as trees, rocks, and foliage are intended to replicate the animal's natural habitat and improve welfare, but they also create visual complexity that challenges computer vision systems. The red panda's natural camouflage further compounds the problem, making it difficult for the model to distinguish the animal from its surroundings.

The study also highlights how camera placement influences detection performance. Areas with fewer obstructions produced higher confidence scores and more reliable tracking, while zones with dense vegetation or distant viewpoints resulted in lower accuracy. This suggests that the physical layout of monitoring systems plays a critical role in determining the success of automated analysis.

These findings reveal a fundamental tension between animal welfare design and technological monitoring. Enclosures that are optimized for animal well-being may not be optimal for machine learning systems, creating a trade-off that must be carefully managed in future implementations.

Future advances could unlock reliable AI-based welfare systems

While the current system falls short of fully replacing manual observation, the study outlines several pathways for improving performance and achieving reliable automated monitoring. One of the most promising approaches is the integration of alternative detection methods that focus on movement rather than visual features.

Techniques such as background subtraction could help isolate the animal from its environment by detecting changes in pixel patterns, making it easier to track movement even in visually complex settings. However, these methods may struggle with stationary behaviors, indicating that hybrid approaches will likely be necessary.
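As a rough illustration of the idea (not the study's implementation), a running-average background subtractor can be written in a few lines of NumPy; the `learning_rate` and `threshold` parameters here are illustrative values, not figures from the paper:

```python
import numpy as np

class BackgroundSubtractor:
    """Minimal running-average background model: pixels that deviate from
    the learned background by more than `threshold` are flagged as motion."""

    def __init__(self, learning_rate=0.05, threshold=25.0):
        self.background = None
        self.learning_rate = learning_rate
        self.threshold = threshold

    def apply(self, frame):
        frame = frame.astype(np.float64)
        if self.background is None:
            # The first frame initialises the background model.
            self.background = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.threshold
        # Blend the new frame into the background slowly, so gradual
        # lighting changes are absorbed while moving animals stand out.
        self.background += self.learning_rate * (frame - self.background)
        return mask
```

The sketch also exposes the weakness noted above: an animal that stays still is gradually blended into the background model and vanishes from the motion mask, which is why hybrid approaches would likely be needed for resting behaviors.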

The researchers also point to pose estimation as a potential solution. Instead of relying on full-body detection, pose estimation models track specific anatomical points such as limbs and joints, allowing behavior to be inferred even when the animal is partially obscured. This approach could significantly improve detection rates in environments where full visibility is rare.
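A toy version of that inference step, with hypothetical keypoint names and a deliberately crude geometric rule (not the models the researchers propose), might look like this:

```python
# Illustrative heuristic only: infer a coarse behaviour label from whichever
# anatomical keypoints are visible, so partial occlusion does not
# automatically force an "unidentifiable" result.

def infer_behaviour(keypoints, min_points=2):
    """keypoints: dict mapping point names (e.g. 'nose', 'hip') to (x, y)
    image coordinates, with occluded points set to None."""
    visible = {k: v for k, v in keypoints.items() if v is not None}
    if len(visible) < min_points:
        return "unidentifiable"
    xs = [x for x, _ in visible.values()]
    ys = [y for _, y in visible.values()]
    height_span = max(ys) - min(ys)
    width_span = max(xs) - min(xs)
    # A body much wider than it is tall suggests a lying posture; a tall,
    # narrow configuration suggests standing, climbing, or moving.
    if height_span <= 0.5 * width_span:
        return "resting"
    return "locomotion"
```

A real pose-estimation pipeline would learn these rules from labeled keypoint trajectories rather than hand-coding them, but the principle is the same: behavior can be inferred from a handful of visible points even when the full body is hidden.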

Hardware improvements are equally important. The study suggests that placing cameras closer to key areas such as feeding stations and resting sites could reduce occlusion and improve detection consistency. The use of thermal imaging is also highlighted as a way to overcome camouflage and enable reliable monitoring during nighttime activity.

Another challenge identified is the computational cost of processing large volumes of video data. Current systems require significant processing power, limiting their ability to operate in real time. Addressing this issue will be critical for scaling automated monitoring systems across multiple enclosures and institutions.
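One common mitigation, offered here as a general technique rather than a recommendation from the study, is temporal subsampling: analyzing every Nth frame instead of all of them, which reduces the classifier workload roughly in proportion to the sampling step.

```python
def sample_indices(n_frames, source_fps, target_fps):
    """Indices of the frames to analyse when downsampling a video stream
    from source_fps to roughly target_fps (step of at least one frame)."""
    step = max(1, round(source_fps / target_fps))
    return list(range(0, n_frames, step))

# Example: a 25 fps camera analysed at 1 fps gives a 25x reduction in
# frames processed, traded against lower temporal resolution.
```

Whether such a trade-off is acceptable depends on the behavior being monitored; brief events could fall between sampled frames, while long-duration states like resting would still be captured.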

The study also calls for more diverse training datasets. Models trained on a single individual or environment may struggle to generalize to other animals or settings, reducing their broader applicability. Expanding datasets to include multiple individuals, seasons, and environmental conditions could improve model robustness and reliability.

  • FIRST PUBLISHED IN:
  • Devdiscourse