🤖 AI Summary
This study addresses the challenge of accurately detecting and counting caribou (reindeer) in Arctic aerial imagery, where complex backgrounds, sparse targets, and significant scale variations hinder conventional approaches. To overcome severe annotation scarcity and class imbalance, the authors propose a weakly supervised patch-level pretraining strategy that initializes the HerdNet detection network using only coarse binary labels indicating whether image patches are empty or non-empty. This approach adapts weakly supervised pretraining to wildlife detection and, combined with transfer learning, substantially enhances model robustness. Evaluated on multi-herd test sets from 2017 and 2019, the method achieves F1 scores of 93.7% and 92.6%, respectively, significantly outperforming ImageNet-pretrained baselines in both positive-sample detection and whole-image counting accuracy.
📝 Abstract
Caribou across the Arctic have declined in recent decades, motivating scalable and accurate monitoring approaches to guide evidence-based conservation actions and policy decisions. Manual interpretation of aerial survey imagery is labor-intensive and error-prone, underscoring the need for automatic and reliable detection across varying scenes. Yet such automatic detection is challenging due to severe background heterogeneity, dominant empty terrain (class imbalance), small or occluded targets, and wide variation in density and scale. To make the detection model (HerdNet) more robust to these challenges, a weakly supervised patch-level pretraining strategy built on the detection network's architecture is proposed. The detection dataset covers five caribou herds distributed across Alaska. By learning from empty vs. non-empty labels in this dataset, the approach produces early weakly supervised knowledge that enhances detection relative to HerdNet initialized from generic weights. Accordingly, the patch-based pretraining network attained high accuracy on multi-herd imagery (2017) and on an independent year's (2019) test set (F1: 93.7%/92.6%, respectively), enabling reliable mapping of regions containing animals to facilitate manual counting on large aerial imagery. Transferred to detection, initialization from weakly supervised pretraining yielded consistent gains over ImageNet weights on both positive patches (F1: 92.6%/93.5% vs. 89.3%/88.6%) and full-image counting (F1: 95.5%/93.3% vs. 91.5%/90.4%). Remaining limitations are false positives from animal-like background clutter and false negatives related to occlusions at low animal density. Overall, pretraining on coarse labels prior to detection makes it possible to rely on weakly supervised pretrained weights even when labeled data are limited, achieving results that match or exceed generic-weight initialization.
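The two-stage idea described above (pretrain a shared feature extractor on coarse empty vs. non-empty patch labels, then reuse those weights to initialize the detector instead of generic ImageNet weights) can be sketched in miniature. This is a hedged illustration only: it uses a toy linear "backbone" trained with logistic regression on synthetic patch features, not the authors' HerdNet implementation, and all names (`pretrain_weights`, `w_detect`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# --- Stage 1: weakly supervised patch-level pretraining -------------------
# X stands in for flattened image-patch features; y is the coarse binary
# label: 1 if the patch contains at least one animal, 0 if it is empty.
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)              # synthetic ground-truth direction
y = (X @ true_w > 0).astype(float)

pretrain_weights = np.zeros(16)           # shared "backbone" parameters
for _ in range(500):                      # plain gradient descent on the
    p = sigmoid(X @ pretrain_weights)     # binary cross-entropy loss
    pretrain_weights -= 0.1 * X.T @ (p - y) / len(y)

pretrain_acc = ((sigmoid(X @ pretrain_weights) > 0.5) == y).mean()

# --- Stage 2: transfer to detection ---------------------------------------
# Instead of random/generic initialization, the detection model starts from
# the weakly supervised weights and is fine-tuned on the (scarcer) detection
# labels. Here we only show the initialization step.
w_detect = pretrain_weights.copy()        # weak-label initialization
print(f"patch pretraining accuracy: {pretrain_acc:.2f}")
```

In the real pipeline the copied weights would be the convolutional backbone of HerdNet (loaded layer-by-layer), and stage 2 would fine-tune on point-level animal annotations; the sketch only conveys the weight-handoff pattern.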