🤖 AI Summary
Animal individual re-identification (ReID) suffers from performance degradation under pose-induced geometric deformation of fur and skin patterns. To address this, we propose an unsupervised, geometry-aware texture unwrapping method that maps deformable biological textures onto a canonical UV space, improving the robustness of cross-pose feature matching. Our approach introduces the first self-supervised framework for UV parameterization that requires no annotated UV maps; it jointly estimates surface normals to enforce 3D-to-2D geometric consistency and achieve structure-preserving texture unfolding. The method is fully differentiable and integrates end-to-end into existing ReID architectures. Evaluated on seal and leopard datasets, both exhibiting significant pose and viewpoint variability, our method achieves up to a 5.4% absolute improvement in re-identification accuracy. It markedly stabilizes matching across diverse poses and viewpoints, demonstrating strong generalization for fine-grained biometric identification under unconstrained conditions.
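To make the pipeline concrete, below is a minimal PyTorch-style sketch of what a differentiable, normal-guided UV unwrapping module might look like. It predicts, for each texel of a canonical UV canvas, the image coordinate to sample from, and emits per-texel surface normals as an auxiliary output. All names (`NormalGuidedUnwrapper`, `sample_head`, `normal_head`, `uv_size`) and layer choices are illustrative assumptions, not the authors' architecture, which this summary does not specify.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalGuidedUnwrapper(nn.Module):
    """Maps an animal image to a canonical UV texture via a predicted,
    differentiable sampling grid, with surface normals as an auxiliary output.
    A hypothetical sketch; not the paper's actual model."""

    def __init__(self, uv_size=(128, 128), feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(uv_size),   # align features to the UV canvas
        )
        # For each UV texel, predict the image coordinate it is sampled from.
        self.sample_head = nn.Conv2d(feat_dim, 2, 1)
        # Per-texel surface normals, usable for geometric-consistency losses.
        self.normal_head = nn.Conv2d(feat_dim, 3, 1)

    def forward(self, image: torch.Tensor):
        feats = self.encoder(image)                      # (B, C, Hu, Wu)
        grid = torch.tanh(self.sample_head(feats))       # coords in [-1, 1]
        grid = grid.permute(0, 2, 3, 1)                  # (B, Hu, Wu, 2) for grid_sample
        unwrapped = F.grid_sample(image, grid, align_corners=False)
        normals = F.normalize(self.normal_head(feats), dim=1)
        return unwrapped, grid, normals

# Usage: an image of any spatial size is resampled into a fixed UV texture.
model = NormalGuidedUnwrapper()
img = torch.randn(1, 3, 256, 256)
tex, grid, normals = model(img)   # tex has shape (1, 3, 128, 128)
```

Because the sampling grid is produced by a network and consumed by `grid_sample`, gradients flow from any downstream ReID loss back through the unwrapping step, which is what makes end-to-end integration possible.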
📝 Abstract
Existing individual re-identification methods often struggle with the deformable nature of animal fur and skin patterns, which undergo geometric distortions due to body movement and posture changes. In this paper, we propose a geometry-aware texture mapping approach that unwraps pelage patterns, the unique markings found on an animal's skin or fur, into a canonical UV space, enabling more robust feature matching. Our method uses surface normal estimation to guide the unwrapping process while preserving geometric consistency between the 3D surface and the 2D texture space. We focus on two challenging species, Saimaa ringed seals (Pusa hispida saimensis) and leopards (Panthera pardus), both of which have distinctive yet highly deformable fur patterns. By integrating our pattern-preserving UV mapping with existing re-identification techniques, we demonstrate improved accuracy across diverse poses and viewing angles. Our framework requires no ground-truth UV annotations and can be trained in a self-supervised manner. Experiments on seal and leopard datasets show up to a 5.4% improvement in re-identification accuracy.
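Since no ground-truth UV annotations are used, the training signal must come from the data itself. One plausible form, sketched below, is cross-pose consistency: two images of the same individual should unwrap to the same canonical texture, with a smoothness prior on the predicted normals. This loss is a hypothetical stand-in built on the `NormalGuidedUnwrapper` sketch above, not the paper's actual objective.

```python
import torch.nn.functional as F

def self_supervised_loss(unwrapper, view_a, view_b, smooth_weight=0.1):
    """Hypothetical objective: cross-pose texture consistency for the same
    individual, plus a smoothness prior on the predicted surface normals.
    `unwrapper` is the NormalGuidedUnwrapper sketched earlier."""
    tex_a, _, normals = unwrapper(view_a)
    tex_b, _, _ = unwrapper(view_b)
    # The same animal should yield the same canonical UV texture in any pose.
    consistency = F.l1_loss(tex_a, tex_b)
    # Neighbouring texels on a smooth body surface share similar orientation.
    smooth = (normals[:, :, 1:, :] - normals[:, :, :-1, :]).abs().mean() + \
             (normals[:, :, :, 1:] - normals[:, :, :, :-1]).abs().mean()
    return consistency + smooth_weight * smooth
```

Under this formulation, the ReID backbone would consume the pose-normalized textures rather than raw images, so pattern matching no longer has to compensate for body deformation.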