Adaptive Depth-converted-Scale Convolution for Self-supervised Monocular Depth Estimation

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inherent scale-depth ambiguity in monocular video, where continuous scale variations caused by depth changes are not explicitly modeled by existing methods. To resolve this, the authors propose Depth-converted-Scale Convolution (DcSConv), a framework that, for the first time, treats adaptive scale adjustment of convolutional filters as a core mechanism. Leveraging a prior depth-scale relationship, DcSConv dynamically modulates receptive fields to better capture structural information. A DcS-aware fusion module further integrates features from conventional convolutions and DcSConv in a plug-and-play manner, emphasizing scale selection over local deformation modeling. Extensive experiments demonstrate significant gains on the KITTI benchmark, with up to an 11.6% reduction in SqRel error and consistent improvements across multiple baseline models.
📝 Abstract
Self-supervised monocular depth estimation (MDE) has received increasing interest in recent years. The objects in a scene, including their sizes and the relationships among them, are the main cues for extracting scene structure. However, previous works lack explicit handling of the change in an object's apparent size caused by a change in its depth. In a monocular video in particular, the size of the same object changes continuously, resulting in size-depth ambiguity. To address this problem, we propose a Depth-converted-Scale Convolution (DcSConv) enhanced monocular depth estimation framework that incorporates the prior relationship between object depth and object scale to extract features at appropriate scales of the convolution receptive field. The proposed DcSConv focuses on the adaptive scale of the convolution filter rather than the local deformation of its shape, establishing that the scale of the filter matters no less (and in the evaluated task, even more) than its local deformation. Moreover, a Depth-converted-Scale aware Fusion (DcS-F) module is developed to adaptively fuse the DcSConv features with conventional convolution features. The DcSConv-enhanced framework can be applied on top of existing CNN-based methods as a plug-and-play module that enhances the conventional convolution block. Extensive experiments with different baselines on the KITTI benchmark show that our method achieves the best results, with an improvement of up to 11.6% in terms of SqRel reduction. An ablation study further validates the effectiveness of each proposed module.
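The paper itself does not publish implementation details here, but the core idea — selecting the convolution's receptive-field scale from depth rather than deforming the kernel locally — can be illustrated with a minimal PyTorch sketch. Everything below (class name, the dilation set, the 1x1 scale-selection head, soft per-pixel blending) is a hypothetical reconstruction, not the authors' DcSConv: it reuses one shared 3x3 kernel at several dilations and blends the results with weights predicted from a coarse depth map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DcSConvSketch(nn.Module):
    """Hypothetical sketch of depth-conditioned scale selection.

    The same 3x3 kernel is applied at several dilations (i.e. several
    receptive-field scales); a per-pixel softmax predicted from the depth
    map blends the scales, so e.g. nearer, larger-appearing objects can
    favour a larger receptive field. Scale selection, not kernel
    deformation, is the mechanism being illustrated.
    """

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = dilations
        # One shared 3x3 kernel reused at every scale.
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.01)
        # Predicts a soft scale-selection map from the (1-channel) depth map.
        self.scale_head = nn.Conv2d(1, len(dilations), kernel_size=1)

    def forward(self, x, depth):
        # Per-pixel weights over the scales: (B, num_scales, H, W).
        alpha = torch.softmax(self.scale_head(depth), dim=1)
        out = 0
        for i, d in enumerate(self.dilations):
            # padding=d keeps the spatial size fixed for a 3x3 kernel.
            y = F.conv2d(x, self.weight, padding=d, dilation=d)
            out = out + alpha[:, i:i + 1] * y
        return out

x = torch.randn(2, 8, 16, 16)      # feature map
depth = torch.rand(2, 1, 16, 16)   # coarse depth prediction
y = DcSConvSketch(8)(x, depth)
print(y.shape)  # torch.Size([2, 8, 16, 16])
```

A DcS-F-style fusion, as described in the abstract, would additionally combine this output with a plain convolution branch; a learned gate over the two branches is one plausible realization, which is what makes the module usable as a plug-and-play replacement for a conventional convolution block.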
Problem

Research questions and friction points this paper is trying to address.

monocular depth estimation
depth-scale ambiguity
object scale variation
self-supervised learning
convolution receptive field
Innovation

Methods, ideas, or system contributions that make the work stand out.

Depth-converted-Scale Convolution
self-supervised monocular depth estimation
adaptive scale
receptive field
plug-and-play module
Yanbo Gao
Shandong University
Video Coding, 3D Video Processing, Deep Learning
Huibin Bai
School of Control Science and Engineering, Shandong University; and Key Laboratory of Machine Intelligence and System Control, Ministry of Education, Jinan 250100, China
Huasong Zhou
School of Control Science and Engineering, Shandong University; and Key Laboratory of Machine Intelligence and System Control, Ministry of Education, Jinan 250100, China
Xingyu Gao
Professor of Computer Science, Chinese Academy of Sciences
Machine Learning, Computer Vision, Multimedia, Ubiquitous Computing
Shuai Li
Shandong University
IndRNN, image/video coding, 3D video processing, computer vision, deep learning
Xun Cai
School of Software, Shandong University, Jinan 250100, China; and Shandong University-WeiHai Research Institute of Industrial Technology, Weihai 264209, China
Hui Yuan
School of Control Science and Engineering, Shandong University; and Key Laboratory of Machine Intelligence and System Control, Ministry of Education, Jinan 250100, China
Wei Hua
Research Institute of Interdisciplinary Innovation, Zhejiang Lab, Hangzhou, China
Tian Xie
Research Institute of Interdisciplinary Innovation, Zhejiang Lab, Hangzhou, China