🤖 AI Summary
Monocular depth estimation yields only relative depth, limiting its direct applicability to robotic navigation and manipulation. To address this, this paper proposes an online metric depth recovery method grounded in the robot's kinematic constraints. It models the robot's ego-motion as a dynamic "ruler," combining an LSTM-based regressor, trained online to map relative depth to metric depth, with Bayesian probabilistic filtering that enforces scale consistency across temporally adjacent frames. The approach requires only a monocular camera and the robot's kinematic model; no additional hardware or offline calibration is needed. Evaluated on a real robotic platform, the method reduces absolute depth error by 22.1% and improves the success rate of a downstream task by 52%, significantly outperforming existing monocular metric depth estimation methods.
📝 Abstract
Depth perception is essential for a robot's spatial and geometric understanding of its environment, with many tasks traditionally relying on hardware-based depth sensors like RGB-D or stereo cameras. However, these sensors face practical limitations, including issues with transparent and reflective objects, high costs, calibration complexity, spatial and energy constraints, and increased failure rates in compound systems. While monocular depth estimation methods offer a cost-effective and simpler alternative, their adoption in robotics is limited because they output relative rather than metric depth, which is crucial for robotics applications. In this paper, we propose a method that utilizes a single calibrated camera, enabling the robot to act as a "measuring stick" to convert relative depth estimates into metric depth in real time as tasks are performed. Our approach employs an LSTM-based metric depth regressor, trained online and refined through probabilistic filtering, to accurately restore metric depth across the monocular depth map, particularly in areas proximal to the robot's motion. Experiments with real robots demonstrate that our method significantly outperforms current state-of-the-art monocular metric depth estimation techniques, achieving a 22.1% reduction in depth error and a 52% increase in success rate for a downstream task.
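The core idea — using the robot's known ego-motion as a metric reference and filtering the resulting scale estimate over time — can be illustrated with a minimal sketch. This is not the paper's implementation (which uses an LSTM regressor over the depth map); it assumes a simplified affine relation metric ≈ s·relative + t, fit at pixels where the robot's kinematically known 3D positions are visible, with a 1-D Kalman-style filter standing in for the probabilistic scale refinement. All names and parameters here are illustrative.

```python
import numpy as np

def fit_scale_shift(rel_depth, metric_depth):
    """Least-squares fit of metric ≈ s * rel + t at pixels where the
    robot's kinematic model provides ground-truth metric depth."""
    A = np.stack([rel_depth, np.ones_like(rel_depth)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, metric_depth, rcond=None)
    return s, t

class ScaleFilter:
    """Toy 1-D Kalman-style filter enforcing scale consistency across
    temporally adjacent frames (stand-in for the paper's Bayesian filter)."""
    def __init__(self, s0=1.0, var0=1.0, process_var=1e-3):
        self.s, self.var = s0, var0
        self.process_var = process_var  # how much scale may drift per frame

    def update(self, s_obs, obs_var):
        self.var += self.process_var          # predict: scale drifts slowly
        k = self.var / (self.var + obs_var)   # Kalman gain
        self.s += k * (s_obs - self.s)        # correct toward observation
        self.var *= (1.0 - k)
        return self.s

# Simulated run: true scale 2.5, shift 0.1; noisy per-frame observations.
rng = np.random.default_rng(0)
rel = np.array([1.0, 2.0, 3.0, 4.0])
s_hat, t_hat = fit_scale_shift(rel, 2.5 * rel + 0.1)

filt = ScaleFilter()
for _ in range(30):
    filt.update(2.5 + 0.1 * rng.standard_normal(), obs_var=0.01)
print(round(s_hat, 3), round(filt.s, 2))
```

In this simplified setup the filtered scale converges toward the true value as frames accumulate, which is the intuition behind enforcing scale consistency online rather than trusting any single frame's estimate.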