🤖 AI Summary
This work addresses the inherent scale-depth ambiguity in monocular video, where continuous scale variations caused by depth changes are not explicitly modeled by existing methods. To resolve this, we propose Depth-converted-Scale Convolution (DcSConv), a novel framework that, for the first time, treats adaptive scale adjustment of convolutional filters as a core mechanism. Leveraging the prior relationship between depth and scale, DcSConv dynamically modulates receptive fields to better capture structural information. We further introduce a DcS-aware fusion module that integrates features from conventional convolutions and DcSConv in a plug-and-play manner, emphasizing scale selection over local deformation modeling. Extensive experiments demonstrate significant performance gains on the KITTI benchmark, with up to an 11.6% reduction in SqRel error and consistent improvements across multiple baseline models.
📝 Abstract
Self-supervised monocular depth estimation (MDE) has received increasing interest in recent years. Objects in the scene, including their sizes and the relationships among them, are the main cues for extracting the scene structure. However, previous works lack explicit handling of the changes in an object's apparent size caused by changes in its depth. In a monocular video in particular, the apparent size of the same object changes continuously, resulting in size-depth ambiguity. To address this problem, we propose a Depth-converted-Scale Convolution (DcSConv) enhanced monocular depth estimation framework, which incorporates the prior relationship between object depth and object scale to extract features at appropriate scales of the convolution receptive field. The proposed DcSConv focuses on the adaptive scale of the convolution filter rather than the local deformation of its shape; our results suggest that the scale of the convolution filter matters no less, and in the evaluated task even more, than its local deformation. Moreover, a Depth-converted-Scale aware Fusion (DcS-F) module is developed to adaptively fuse the DcSConv features with conventional convolution features. Our DcSConv-enhanced framework can be applied on top of existing CNN-based methods as a plug-and-play replacement for the conventional convolution block. Extensive experiments with different baselines on the KITTI benchmark show that our method achieves the best results, with up to an 11.6% reduction in SqRel. Ablation studies further validate the effectiveness of each proposed module.
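The depth-scale prior at the heart of DcSConv, that an object's apparent size shrinks in inverse proportion to its depth, so the convolution's sampling scale should grow for nearby (small-depth) pixels, can be illustrated with a minimal NumPy sketch. The `f / depth` mapping, the clipping bounds, and the nearest-neighbour sampling below are illustrative assumptions for clarity, not the paper's actual parameterization; the real DcSConv is learned end-to-end inside a CNN.

```python
import numpy as np

def depth_to_scale(depth, f=1.0, s_min=1.0, s_max=3.0):
    # Depth-scale prior: apparent object size is roughly proportional to
    # f / depth, so the filter's sampling scale follows the same mapping,
    # clipped to a working range. f, s_min, s_max are illustrative
    # hyperparameters, not values from the paper.
    return np.clip(f / depth, s_min, s_max)

def scaled_conv3x3(feat, depth, weight):
    # Single-channel 3x3 convolution whose sampling offsets are stretched
    # per pixel by the depth-conditioned scale (the "adaptive scale of the
    # convolution filter" idea). Nearest-neighbour sampling and border
    # clamping keep the sketch short; a real layer would use bilinear
    # sampling and run on feature maps, not raw images.
    H, W = feat.shape
    scale = depth_to_scale(depth)
    out = np.zeros_like(feat)
    for y in range(H):
        for x in range(W):
            s = scale[y, x]
            acc = 0.0
            for i, dy in enumerate((-1, 0, 1)):
                for j, dx in enumerate((-1, 0, 1)):
                    yy = int(round(float(np.clip(y + dy * s, 0, H - 1))))
                    xx = int(round(float(np.clip(x + dx * s, 0, W - 1))))
                    acc += weight[i, j] * feat[yy, xx]
            out[y, x] = acc
    return out
```

Note the contrast with deformable convolution: here only a single scalar per pixel (the scale) modulates where the kernel samples, rather than a free-form offset for every tap, which is exactly the scale-selection-over-local-deformation trade-off the abstract emphasizes.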