🤖 AI Summary
This paper addresses the challenge of background interference undermining model generalization in wildlife behavior recognition. We introduce the first foreground-background decoupled dataset for wild chimpanzee behavior, comprising 20 hours of video from over 350 camera trap sites, in which each foreground behavior clip is paired with a background-only clip captured at the same camera location, enabling rigorous in-distribution and out-of-distribution (OOD) evaluation. We propose a benchmark protocol based on overlapping versus disjoint camera-site splits, together with an analysis of background duration, to quantify how backgrounds affect cross-scene generalization. To mitigate background bias, we design a latent-space normalization technique. Evaluated on CNN and Transformer backbones, it improves OOD mAP by 5.42% and 3.75%, respectively, and the duration analysis shows that the number of background frames within a clip critically affects recognition robustness.
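To make the two dataset views concrete, the sketch below shows one way overlapping and disjoint splits could be constructed from per-clip camera IDs: the disjoint view holds out entire camera locations (OOD), while the overlapping view splits clips within each location (in-distribution). This is a minimal illustration, not the dataset's released split code; the field name `camera_id` and the 80/20 ratio are assumptions.

```python
import random
from collections import defaultdict

def make_splits(clips, train_frac=0.8, seed=0):
    """Build (overlapping, disjoint) train/test views from a list of clip
    records, each a dict with at least an assumed 'camera_id' field."""
    rng = random.Random(seed)

    # Disjoint view: no camera location appears in both train and test (OOD).
    cameras = sorted({c["camera_id"] for c in clips})
    rng.shuffle(cameras)
    cut = int(train_frac * len(cameras))
    train_cams = set(cameras[:cut])
    disjoint = (
        [c for c in clips if c["camera_id"] in train_cams],
        [c for c in clips if c["camera_id"] not in train_cams],
    )

    # Overlapping view: every camera location contributes clips to both sides (ID).
    by_cam = defaultdict(list)
    for c in clips:
        by_cam[c["camera_id"]].append(c)
    overlap_train, overlap_test = [], []
    for cam_clips in by_cam.values():
        rng.shuffle(cam_clips)
        k = max(1, int(train_frac * len(cam_clips)))
        overlap_train += cam_clips[:k]
        overlap_test += cam_clips[k:]
    overlapping = (overlap_train, overlap_test)

    return overlapping, disjoint
```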
📝 Abstract
Computer vision analysis of camera trap video footage is essential for wildlife conservation, as captured behaviours offer some of the earliest indicators of changes in population health. Recently, several high-impact animal behaviour datasets and methods have been introduced to facilitate such analysis; however, the role of behaviour-correlated background information and its significant effect on out-of-distribution generalisation remain unexplored. In response, we present the PanAf-FGBG dataset, featuring 20 hours of wild chimpanzee behaviours recorded at over 350 individual camera locations. Uniquely, it pairs every video containing a chimpanzee (referred to as a foreground video) with a corresponding background video (containing no chimpanzee) from the same camera location. We present two views of the dataset: one with overlapping camera locations and one with disjoint locations. This setup enables, for the first time, direct evaluation under in-distribution and out-of-distribution conditions, and allows the impact of backgrounds on behaviour recognition models to be quantified. All clips come with rich behavioural annotations and metadata, including unique camera IDs and detailed textual scene descriptions. Additionally, we establish several baselines and present a highly effective latent-space normalisation technique that boosts out-of-distribution performance by +5.42% mAP for convolutional and +3.75% mAP for transformer-based models. Finally, we provide an in-depth analysis of the role of backgrounds in out-of-distribution behaviour recognition, including the previously unexplored impact of background duration (i.e., the count of background frames within foreground videos).
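The abstract does not spell out the normalisation itself, so the following is a minimal sketch of one plausible form of latent-space normalisation, assuming the paired background clip's embedding is subtracted from the foreground clip's embedding before classification; the authors' actual formulation may differ. The class name `BackgroundNormalisedClassifier` and the `backbone`/`feat_dim` interface are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BackgroundNormalisedClassifier(nn.Module):
    """Illustrative sketch only: normalise a foreground clip's latent
    embedding using the embedding of its paired background-only clip
    before classification. Not the paper's exact formulation."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone              # shared video encoder (CNN or Transformer)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, fg_clip: torch.Tensor, bg_clip: torch.Tensor) -> torch.Tensor:
        # fg_clip, bg_clip: (B, C, T, H, W) clips paired at the same camera location
        z_fg = self.backbone(fg_clip)         # (B, feat_dim) foreground embedding
        z_bg = self.backbone(bg_clip)         # (B, feat_dim) background embedding
        # Subtract the background component and re-normalise the residual,
        # reducing the scene-specific bias seen by the classifier.
        z = F.normalize(z_fg - z_bg, dim=-1)
        return self.classifier(z)             # behaviour logits (e.g. with BCEWithLogitsLoss)
```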