🤖 AI Summary
This work addresses the challenge of obtaining accurate, pixel-aligned metric depth from RGB-D cameras in scenarios involving specular or textureless surfaces. To this end, the authors propose LingBot-Depth, a novel model that, for the first time, formulates depth sensor error as a "mask" signal reflecting geometric ambiguity and integrates visual context for masked depth learning. A large-scale dataset comprising three million RGB-depth image pairs is constructed through an automated data-cleaning pipeline, enabling scalable training. The proposed method surpasses high-end RGB-D cameras in both depth accuracy and pixel coverage, and yields cross-modal representations that transfer effectively to multiple downstream tasks.
📝 Abstract
Spatial visual perception is a fundamental requirement in physical-world applications like autonomous driving and robotic manipulation, driven by the need to interact with 3D environments. Capturing pixel-aligned metric depth with RGB-D cameras is arguably the most viable approach, yet it often faces obstacles posed by hardware limitations and challenging imaging conditions, especially in the presence of specular or textureless surfaces. In this work, we argue that the inaccuracies from depth sensors can be viewed as "masked" signals that inherently reflect underlying geometric ambiguities. Building on this motivation, we present LingBot-Depth, a depth completion model that leverages visual context to refine depth maps through masked depth modeling and incorporates an automated data curation pipeline for scalable training. Encouragingly, our model outperforms top-tier RGB-D cameras in terms of both depth precision and pixel coverage. Experimental results on a range of downstream tasks further suggest that LingBot-Depth offers an aligned latent representation across the RGB and depth modalities. We release the code, checkpoint, and 3M RGB-depth pairs (2M real and 1M simulated) to the community of spatial perception.
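To make the masked-depth-modeling idea concrete, below is a minimal PyTorch sketch of how sensor dropouts can be treated as the "mask" and completed from RGB context. Everything here is an illustrative assumption rather than the paper's actual method: the module name `LingBotDepthSketch`, the small convolutional encoder, and the loss weighting are all hypothetical stand-ins for the real architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of masked depth modeling as described in the abstract.
# Unlike random patch masking in masked image modeling, the mask here is
# derived from the sensor's own failure regions (zero-valued depth readings).

class LingBotDepthSketch(nn.Module):
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # Inputs: RGB (3ch) + masked depth (1ch) + validity mask (1ch).
        self.encoder = nn.Sequential(
            nn.Conv2d(5, hidden_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden_dim, hidden_dim, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(hidden_dim, 1, 3, padding=1)  # metric depth

    def forward(self, rgb: torch.Tensor, raw_depth: torch.Tensor) -> torch.Tensor:
        # Treat sensor dropouts (zeros) as masked pixels: the geometrically
        # ambiguous regions the model must fill in from visual context.
        valid = (raw_depth > 0).float()
        masked_depth = raw_depth * valid
        x = torch.cat([rgb, masked_depth, valid], dim=1)
        return self.head(self.encoder(x))


def masked_depth_loss(pred: torch.Tensor,
                      gt_depth: torch.Tensor,
                      raw_depth: torch.Tensor) -> torch.Tensor:
    # Supervise on the masked region (sensor hole, ground truth available),
    # so the model learns to complete ambiguous pixels from context.
    mask = (raw_depth <= 0) & (gt_depth > 0)
    if mask.any():
        return (pred[mask] - gt_depth[mask]).abs().mean()
    return (pred - gt_depth).abs().mean()
```

The key design choice this sketch mirrors is that the mask is not injected artificially; it comes from the depth sensor's own error pattern, so training directly targets the pixels where real cameras fail (e.g., specular or textureless surfaces).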