🤖 AI Summary
This work addresses the challenges of individual animal identification in long videos, where existing approaches either rely heavily on labor-intensive manual annotations or, in self-supervised settings, suffer from high computational costs and poor scalability to long sequences. The authors propose a label-free global clustering framework that requires only bounding box detections and the total number of individuals. By leveraging frame-pair sampling and a bootstrapping mechanism to generate pseudo-labels, the method combines a fixed-cardinality assumption with the Hungarian algorithm for intra-batch matching. Using a frozen pre-trained backbone and a binary cross-entropy loss, it learns discriminative features while drastically reducing memory consumption (under 1 GB per batch) and avoiding temporal error propagation. The approach achieves over 97% accuracy on both the 3D-POP pigeon dataset and the 8-calves dataset, matching or surpassing supervised methods that require annotations spanning thousands of frames.
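The fixed-cardinality matching described above can be sketched concretely. The snippet below is a minimal illustration (function names and the toy data are hypothetical, not from the paper): each frame contains exactly K individuals, so the K detections can be matched one-to-one against K identity prototypes by minimizing total feature distance. A real implementation would use `scipy.optimize.linear_sum_assignment` (the Hungarian algorithm); for small K, a brute-force search over permutations suffices to show the idea.

```python
# Hypothetical sketch of fixed-cardinality pseudo-label assignment.
# Each frame has exactly K detections; we match them one-to-one to K
# identity prototypes by minimising total squared feature distance.
# Production code would use scipy.optimize.linear_sum_assignment
# (the Hungarian algorithm); brute force is used here for clarity.
from itertools import permutations

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def assign_pseudo_labels(embeddings, prototypes):
    """Return an identity index for each detection via min-cost matching."""
    k = len(prototypes)
    cost = [[squared_distance(e, p) for p in prototypes] for e in embeddings]
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(k)):  # perm[i] = identity of detection i
        total = sum(cost[i][perm[i]] for i in range(k))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return list(best_perm)

# Toy example: three detections, three identity prototypes in 2-D.
prototypes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
detections = [(0.9, 0.1), (0.1, 1.1), (0.05, -0.05)]
print(assign_pseudo_labels(detections, prototypes))  # -> [1, 2, 0]
```

The one-to-one constraint is what the fixed-cardinality assumption buys: a nearest-prototype rule could assign two detections in the same frame to one identity, whereas the matching cannot.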
📝 Abstract
Identifying individual animals in long-duration videos is essential for behavioral ecology, wildlife monitoring, and livestock management. Traditional methods require extensive manual annotation, while existing self-supervised approaches are computationally demanding and ill-suited for long sequences due to memory constraints and temporal error propagation. We introduce a highly efficient, self-supervised method that reframes animal identification as a global clustering task rather than a sequential tracking problem. Our approach assumes a known, fixed number of individuals within a single video -- a common scenario in practice -- and requires only bounding box detections and the total count. By sampling pairs of frames, using a frozen pre-trained backbone, and employing a self-bootstrapping mechanism with the Hungarian algorithm for in-batch pseudo-label assignment, our method learns discriminative features without identity labels. We adapt a binary cross-entropy loss from vision-language models, enabling state-of-the-art accuracy ($>$97\%) while consuming less than 1 GB of GPU memory per batch -- an order of magnitude less than standard contrastive methods. Evaluated on challenging real-world datasets (3D-POP pigeons and 8-calves feeding videos), our framework matches or surpasses supervised baselines trained on over 1,000 labeled frames, effectively removing the manual annotation bottleneck. This work enables practical, high-accuracy animal identification on consumer-grade hardware, with broad applicability in resource-constrained research settings. All code written for this paper is available \href{https://huggingface.co/datasets/tonyFang04/8-calves}{here}.
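The binary cross-entropy loss adapted from vision-language models treats every pair of detections as an independent binary classification (same identity or not), in the style of SigLIP's pairwise sigmoid loss. The sketch below is an assumption-laden illustration of that idea, not the paper's exact formulation: it takes a matrix of pairwise similarities and a matrix of same-identity pseudo-labels, and averages the per-pair sigmoid BCE. Because each pair is scored independently, no batch-wide softmax normalization is needed, which is one reason the per-batch memory footprint stays small.

```python
# Hypothetical sketch of a SigLIP-style pairwise sigmoid BCE loss.
# similarities[i][j]  : similarity score between detections i and j
# same_identity[i][j] : 1 if their pseudo-labels agree, else 0
# Each pair is an independent binary problem, so no softmax over the
# whole batch is required -- memory grows only with the number of pairs.
import math

def pairwise_sigmoid_bce(similarities, same_identity):
    """Average binary cross-entropy over all detection pairs."""
    total, count = 0.0, 0
    for s_row, y_row in zip(similarities, same_identity):
        for s, y in zip(s_row, y_row):
            p = 1.0 / (1.0 + math.exp(-s))  # sigmoid
            total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
            count += 1
    return total / count

# A zero-similarity same-identity pair gives p = 0.5, i.e. loss = ln 2.
print(pairwise_sigmoid_bce([[0.0]], [[1]]))  # -> 0.6931...
```

In practice this would be written with batched tensor operations and a numerically stable `logsigmoid`; the scalar loop above only makes the per-pair arithmetic explicit.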