🤖 AI Summary
Vision-language models (VLMs) significantly underperform specialized vision-only models on geometric 3D understanding tasks such as monocular metric depth estimation, largely because they lack geometric priors and task-specific architectural inductive biases. Method: We investigate whether expert-level accuracy can be achieved without modifying VLM architectures or loss functions, through carefully designed supervision alone. To this end, we propose DepthLM, a text-based supervised fine-tuning framework that uses sparse textual depth labels. It introduces two key components: visual prompting to resolve pixel-reference ambiguity, and intrinsic-conditioned augmentation to resolve cross-dataset camera-parameter ambiguity. Results: DepthLM achieves 2.1× higher accuracy than prior SOTA VLMs on NYUv2, matching the performance of leading vision-only models for the first time. Without explicit enforcement, it also suppresses boundary over-smoothing, producing far fewer flying points at depth edges. The method is lightweight and generalizes to diverse 3D perception tasks.
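The two components above can be pictured concretely. Below is a minimal Python sketch of the visual-prompting side, assuming a marker-plus-question interface with plain-text depth targets for standard next-token supervision; the marker style, prompt wording, and helper names are illustrative, not the paper's exact recipe.

```python
# Hedged sketch of visual prompting for pixel reference (illustrative only).
from PIL import Image, ImageDraw

def make_depth_query(image: Image.Image, u: int, v: int, radius: int = 6):
    """Draw a visual marker at pixel (u, v) so the VLM knows which point
    the depth question refers to, then pair it with a plain-text question."""
    marked = image.copy()
    draw = ImageDraw.Draw(marked)
    draw.ellipse((u - radius, v - radius, u + radius, v + radius),
                 outline="red", width=3)
    question = "What is the metric depth of the point marked in red, in meters?"
    return marked, question

def make_text_label(depth_m: float) -> str:
    """Sparse supervision: the ground-truth depth rendered as plain text,
    used as the SFT target under the usual language-modeling loss
    (no dense prediction head, no regression loss)."""
    return f"{depth_m:.2f}"
```

Because the target is just a text token sequence, this slots into any off-the-shelf VLM fine-tuning pipeline without architecture or loss changes.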
📝 Abstract
Vision-language models (VLMs) can flexibly address various vision tasks through text interactions. Although successful in semantic understanding, state-of-the-art VLMs, including GPT-5, still struggle to understand 3D structure from 2D inputs. In contrast, expert pure-vision models achieve super-human accuracy in metric depth estimation, a key 3D understanding task, but they require task-specific architectures and losses. This gap motivates us to ask: can VLMs reach expert-level accuracy without architecture or loss changes? We take per-pixel metric depth estimation as the representative task and show that the answer is yes. Surprisingly, comprehensive analysis shows that text-based supervised fine-tuning with sparse labels is sufficient for VLMs to unlock strong 3D understanding; no dense prediction head or complex regression/regularization loss is needed. The actual bottleneck for VLMs lies in pixel reference and cross-dataset camera ambiguity, which we address through visual prompting and intrinsic-conditioned augmentation. With much smaller models, our method, DepthLM, surpasses the accuracy of the most advanced VLMs by over 2×, making VLMs comparable with pure-vision models for the first time. Interestingly, without explicit enforcement during training, VLMs trained with DepthLM naturally avoid over-smoothing, producing far fewer flying points at boundary regions than pure-vision models. The simplicity of DepthLM also enables a single VLM to cover various 3D tasks beyond metric depth. Our code and model will be released at the link below.
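To make the camera-ambiguity point concrete: the same object imaged at different focal lengths occupies different pixel areas, so apparent size alone cannot determine metric depth across datasets. Below is a minimal sketch of one plausible form of intrinsic-conditioned augmentation, assuming images are rescaled to a canonical focal length; the constant, resize rule, and function name are assumptions for illustration (an alternative instantiation would state the intrinsics directly in the text prompt).

```python
# Hedged sketch of intrinsic conditioning via focal-length normalization
# (one plausible instantiation, not necessarily the paper's exact method).
from PIL import Image

CANONICAL_FOCAL = 500.0  # assumed canonical focal length, in pixels

def normalize_intrinsics(image: Image.Image, focal_px: float) -> Image.Image:
    """Resize the image so its effective focal length matches the canonical
    value. Resizing by s scales the focal length to s * focal_px, so choosing
    s = CANONICAL_FOCAL / focal_px maps every dataset to one virtual camera,
    making apparent size a consistent cue for metric depth."""
    scale = CANONICAL_FOCAL / focal_px
    w, h = image.size
    resized = image.resize((round(w * scale), round(h * scale)))
    # Depth labels are unchanged: it is the same 3D scene, viewed through
    # the canonical camera.
    return resized
```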