DepthLM: Metric Depth From Vision Language Models

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) significantly underperform specialized vision-only models on geometric 3D understanding tasks—e.g., monocular depth estimation—primarily due to their lack of geometric priors and task-specific architectural inductive biases. Method: We investigate whether expert-level accuracy can be achieved without modifying VLM architectures or loss functions, solely through carefully designed supervision. To this end, we propose DepthLM, a text-supervised fine-tuning framework leveraging sparse textual depth labels. It introduces two key innovations: visual prompting to localize depth-relevant regions and intrinsic-conditioned augmentation to resolve pixel-reference ambiguity and cross-dataset camera parameter ambiguity. Results: DepthLM achieves 2.1× higher accuracy than prior SOTA VLMs on NYUv2, matching the performance of leading vision-only models for the first time. It inherently suppresses boundary over-smoothing and flying-point artifacts. The method is lightweight and generalizable to diverse 3D perception tasks.

📝 Abstract
Vision language models (VLMs) can flexibly address various vision tasks through text interactions. Although successful in semantic understanding, state-of-the-art VLMs including GPT-5 still struggle to understand 3D from 2D inputs. On the other hand, expert pure vision models achieve super-human accuracy in metric depth estimation, a key 3D understanding task. However, they require task-specific architectures and losses. This gap motivates us to ask: Can VLMs reach expert-level accuracy without architecture or loss changes? We take per-pixel metric depth estimation as the representative task and show that the answer is yes! Surprisingly, comprehensive analysis shows that text-based supervised fine-tuning with sparse labels is sufficient for VLMs to unlock strong 3D understanding; no dense prediction head or complex regression/regularization loss is needed. The actual bottleneck for VLMs lies in pixel reference and cross-dataset camera ambiguity, which we address through visual prompting and intrinsic-conditioned augmentation. With much smaller models, our method DepthLM surpasses the accuracy of the most advanced VLMs by over 2x, making VLMs comparable with pure vision models for the first time. Interestingly, without explicit enforcement during training, VLMs trained with DepthLM naturally avoid over-smoothing, producing far fewer flying points at boundary regions than pure vision models. The simplicity of DepthLM also enables a single VLM to cover various 3D tasks beyond metric depth. Our code and model will be released at the link below.
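The visual prompting idea from the abstract — marking the query pixel directly in the image so the model knows which point's depth is being asked about, rather than referring to it through text coordinates — can be sketched as follows. This is a minimal NumPy illustration under an assumed marker style (a thin ring overlay); the function name and parameters are hypothetical, not the paper's implementation:

```python
import numpy as np

def draw_visual_prompt(image, u, v, radius=6, color=(255, 0, 0)):
    """Overlay a ring marker at pixel (u, v) on an HxWx3 uint8 image.

    The marked image can then be paired with a prompt like
    "What is the metric depth at the marked point?", avoiding the
    ambiguity of referencing a pixel purely via text coordinates.
    """
    out = image.copy()
    h, w = image.shape[:2]
    ys, xs = np.ogrid[:h, :w]  # broadcastable row/column grids
    # Thin ring: squared distance to (u, v) close to radius^2.
    dist2 = (xs - u) ** 2 + (ys - v) ** 2
    ring = np.abs(dist2 - radius ** 2) <= radius
    out[ring] = color
    return out
```

In practice a library such as OpenCV (`cv2.circle`) would draw the marker; the sketch above only shows the geometric idea without external dependencies.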
Problem

Research questions and friction points this paper is trying to address.

VLMs struggle with 3D understanding from 2D inputs
Expert models require specialized architectures for depth estimation
Pixel reference and camera ambiguity hinder VLM 3D performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-based supervised fine-tuning with sparse depth labels
Visual prompting resolves pixel reference ambiguity
Intrinsic-conditioned augmentation handles camera parameter variation
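The third innovation above — intrinsic-conditioned augmentation — can be sketched in a few lines. Under a pinhole camera model, a point at depth Z projects to u = fx·X/Z + cx, so resizing the image by a factor s is geometrically equivalent to scaling the focal lengths by s while the metric depth labels stay unchanged; exposing the adjusted intrinsics to the model resolves cross-dataset camera ambiguity. This is an illustrative sketch with hypothetical names, not the authors' code:

```python
import numpy as np

def intrinsic_conditioned_resize(image, depth, fx, fy, scale):
    """Resize an image and rescale its camera intrinsics consistently.

    Metric depth is invariant to the resize, but the focal lengths
    must be multiplied by `scale` so that the (image, intrinsics,
    depth) triple remains geometrically consistent. The new
    intrinsics accompany the sample, e.g. serialized into the text
    prompt given to the VLM as a conditioning signal.
    """
    h, w = image.shape[:2]
    new_h, new_w = int(h * scale), int(w * scale)
    # Nearest-neighbor resize via index mapping (no external deps).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized_image = image[rows][:, cols]
    resized_depth = depth[rows][:, cols]
    return resized_image, resized_depth, fx * scale, fy * scale
```

Sampling `scale` randomly during fine-tuning exposes the model to many effective cameras, forcing it to rely on the conditioned intrinsics rather than memorizing one dataset's camera geometry.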
Authors
Zhipeng Cai (Meta)
Ching-Feng Yeh (Meta)
Hu Xu (Meta)
Zhuang Liu (Princeton University)
Gregory Meyer (Meta)
Xinjie Lei (Meta)
Changsheng Zhao (Meta AI)
Shang-Wen Li (Meta)
Vikas Chandra (Meta)
Yangyang Shi (Meta)