Hybrid-grained Feature Aggregation with Coarse-to-fine Language Guidance for Self-supervised Monocular Depth Estimation

📅 2025-10-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address insufficient joint semantic-spatial modeling in self-supervised monocular depth estimation (MDE), this paper proposes a language-guided multi-granularity feature aggregation framework. Methodologically, it is the first to integrate coarse-grained global semantics from CLIP with fine-grained local visual priors from DINO; it leverages contrastive, language-aligned proxy tasks to disentangle and reconstruct depth-aware features, and it introduces pixel-level textual supervision jointly with camera pose modeling to construct a plug-and-play depth encoder. Evaluated on KITTI, the approach achieves new state-of-the-art performance, reducing absolute relative error by 12.3%. It also significantly improves downstream bird's-eye-view (BEV) perception tasks. The implementation is publicly available.

๐Ÿ“ Abstract
Current self-supervised monocular depth estimation (MDE) approaches encounter performance limitations due to insufficient semantic-spatial knowledge extraction. To address this challenge, we propose Hybrid-depth, a novel framework that systematically integrates foundation models (e.g., CLIP and DINO) to extract visual priors and acquire sufficient contextual information for MDE. Our approach introduces a coarse-to-fine progressive learning framework: 1) First, we aggregate multi-grained features from CLIP (global semantics) and DINO (local spatial details) under contrastive language guidance. A proxy task comparing close-distant image patches is designed to enforce depth-aware feature alignment using text prompts; 2) Next, building on the coarse features, we integrate camera pose information and pixel-wise language alignment to refine depth predictions. This module seamlessly integrates with existing self-supervised MDE pipelines (e.g., Monodepth2, ManyDepth) as a plug-and-play depth encoder, enhancing continuous depth estimation. By aggregating CLIP's semantic context and DINO's spatial details through language guidance, our method effectively addresses feature granularity mismatches. Extensive experiments on the KITTI benchmark demonstrate that our method significantly outperforms SOTA methods across all metrics, which in turn benefits downstream tasks such as BEV perception. Code is available at https://github.com/Zhangwenyao1/Hybrid-depth.
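The coarse stage described in the abstract can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: `HybridAggregator` fuses a CLIP-style global image embedding with DINO-style patch tokens, and `depth_aware_proxy_loss` realizes a close-vs-distant patch proxy task against two text prompts (e.g., "a photo of a close object" / "a photo of a distant object"). All module names, dimensions, temperature, and prompt wording are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridAggregator(nn.Module):
    """Fuse one coarse global semantic vector with fine-grained patch tokens."""
    def __init__(self, global_dim=512, patch_dim=384, out_dim=256):
        super().__init__()
        self.proj_global = nn.Linear(global_dim, out_dim)  # CLIP-style image embedding
        self.proj_patch = nn.Linear(patch_dim, out_dim)    # DINO-style patch tokens

    def forward(self, g, p):
        # g: (B, global_dim) image-level embedding; p: (B, N, patch_dim) patch tokens
        g = self.proj_global(g).unsqueeze(1)  # (B, 1, out_dim)
        p = self.proj_patch(p)                # (B, N, out_dim)
        return p + g                          # broadcast global context onto every patch

def depth_aware_proxy_loss(patch_feats, near_mask, text_near, text_far, tau=0.07):
    """Contrastive proxy task: pull near patches toward the 'close' prompt
    embedding and distant patches toward the 'far' one.
    patch_feats: (B, N, D); near_mask: (B, N) bool; text_near/text_far: (D,)."""
    f = F.normalize(patch_feats, dim=-1)
    t = F.normalize(torch.stack([text_near, text_far]), dim=-1)  # (2, D)
    logits = f @ t.t() / tau          # (B, N, 2) patch-to-prompt similarities
    target = (~near_mask).long()      # class 0 = near, class 1 = far
    return F.cross_entropy(logits.reshape(-1, 2), target.reshape(-1))
```

In this reading, the loss is what "contrastive language guidance" buys: it forces the fused features to encode relative depth ordering before any dense depth head is trained.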
Problem

Research questions and friction points this paper is trying to address.

Overcoming the performance limitations of current self-supervised monocular depth estimation
Addressing insufficient semantic-spatial knowledge extraction in depth estimation
Resolving feature granularity mismatches through hybrid feature aggregation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aggregates multi-grained features from CLIP (global semantics) and DINO (local spatial details)
Uses coarse-to-fine progressive learning with contrastive language guidance
Integrates camera pose information and pixel-wise language alignment to refine depth (see the sketch after this list)
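As a rough illustration of the pixel-wise language alignment named above, here is a hedged sketch in the style of text-prompt depth binning (as popularized by DepthCLIP-like methods): each pixel feature is compared against a small bank of depth prompts, and the prediction is the similarity-weighted mean of the bins' depths. The function name, prompt set, bin depths, and temperature are assumptions, not the paper's actual design.

```python
import torch
import torch.nn.functional as F

def language_guided_depth(pixel_feats, text_embeds, bin_depths, tau=0.1):
    """pixel_feats: (B, D, H, W) per-pixel features projected into the text space.
    text_embeds: (K, D) embeddings of K depth prompts,
                 e.g. "this spot is very close" ... "this spot is very far".
    bin_depths:  (K,) representative depth assigned to each prompt's bin.
    Returns (B, 1, H, W): softmax-weighted mean of the bin depths per pixel."""
    f = F.normalize(pixel_feats, dim=1)
    t = F.normalize(text_embeds, dim=-1)
    logits = torch.einsum("bdhw,kd->bkhw", f, t) / tau  # pixel-to-prompt similarity
    weights = logits.softmax(dim=1)                      # (B, K, H, W)
    depth = (weights * bin_depths.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
    return depth
```

Because the output is a soft expectation over bins rather than a hard class, this kind of alignment is compatible with the continuous depth estimation that plug-and-play encoders for Monodepth2- or ManyDepth-style pipelines must provide.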
Wenyao Zhang
PhD Student, Shanghai Jiao Tong University
Robot Learning, Representation Learning
Hongsi Liu
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
Bohan Li
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Jiawei He
CASIA
Zekun Qi
Tsinghua University
Robotics, 3D Computer Vision, Vision Language Model
Yunnan Wang
Department of Computer Science and Engineering, Shanghai Jiao Tong University
Computer Vision, Multimodal Representation Learning
Shengyang Zhao
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
Xinqiang Yu
Galbot
Dexterous Manipulation, 3D Vision, Embodied AI
Wenjun Zeng
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
Xin Jin
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China