🤖 AI Summary
Existing animal depth estimation datasets generally lack real-world metric scale information, hindering the advancement of high-precision multimodal perception and 3D reconstruction. To address this limitation, this work presents the first cross-domain, multimodal wildlife dataset that synchronously captures RGB images and LiDAR point clouds with accurate metric scale, supporting tasks including depth estimation, behavior detection, and 3D reconstruction. Leveraging this dataset, the proposed multimodal fusion strategy reduces the RMSE of depth estimation by 10% and improves the Chamfer distance in 3D reconstruction by 12%, significantly advancing research in embodied intelligence and 3D scene understanding in wild environments.
📝 Abstract
Depth estimation and 3D reconstruction have been extensively studied as core topics in computer vision. Starting from rigid objects with relatively simple geometric shapes, such as vehicles, the research has expanded to address general objects, including challenging deformable objects, such as humans and animals. However, for the animal, in particular, the majority of existing models are trained based on datasets without metric scale, which can help validate image-only models. To address this limitation, we present WildDepth, a multimodal dataset and benchmark suite for depth estimation, behavior detection, and 3D reconstruction from diverse categories of animals ranging from domestic to wild environments with synchronized RGB and LiDAR. Experimental results show that the use of multi-modal data improves depth reliability by up to 10% RMSE, while RGB-LiDAR fusion enhances 3D reconstruction fidelity by 12% in Chamfer distance. By releasing WildDepth and its benchmarks, we aim to foster robust multimodal perception systems that generalize across domains.