🤖 AI Summary
To address the degradation of depth estimation performance in autonomous driving under adverse weather conditions (e.g., rain, fog, nighttime), this paper proposes a lightweight multimodal adaptation framework that enables robust cross-weather transfer from the source domain to target domains, without synthesizing target-domain images or relying on complex feature augmentation. Methodologically, it integrates CLIP's text encoder, diffusion-based visual features, and a cross-modal alignment loss. Key contributions include: (1) a novel prompt-driven domain alignment mechanism that explicitly models the correlation between weather semantics and depth representations via vision–language contrastive learning; and (2) the first application of LoRA to cross-weather depth estimation, enabling language-guided visual representation calibration without requiring pre-aligned multimodal features. Evaluated on nuScenes and Oxford RobotCar, the method achieves state-of-the-art performance, significantly improving both accuracy and adaptation efficiency under challenging weather conditions.
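The LoRA adaptation mentioned above can be pictured as a small trainable low-rank update attached to a frozen encoder projection. The sketch below is illustrative only, assuming a standard PyTorch linear layer; the class name and hyperparameters are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: only these are updated during source-to-target adaptation
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scale * B A x
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
```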
📝 Abstract
The autonomous driving community is increasingly focused on addressing corner case problems, particularly those related to ensuring driving safety under adverse conditions (e.g., nighttime, fog, rain). To this end, the task of Adverse Condition Depth Estimation (ACDE) has gained significant attention. Previous ACDE approaches have primarily relied on generative models, which require additional target images to translate sunny conditions into adverse weather, or on learnable parameters for feature augmentation to bridge domain gaps, resulting in increased model complexity and tuning effort. Furthermore, unlike CLIP-based methods, in which textual and visual features are pre-aligned, depth estimation models lack sufficient alignment between multimodal features, hindering coherent understanding under adverse conditions. To address these limitations, we propose Multi-Modality Driven LoRA (MMD-LoRA), which leverages low-rank adaptation matrices for efficient fine-tuning from the source domain to target domains. It consists of two core components: Prompt Driven Domain Alignment (PDDA) and Visual-Text Consistent Contrastive Learning (VTCCL). In PDDA, the image encoder equipped with MMD-LoRA generates target-domain visual representations, supervised by an alignment loss that constrains the source-target difference in the language embedding space to equal the source-target difference in the image embedding space. Meanwhile, VTCCL bridges the gap between textual features from CLIP and visual features from the diffusion model, pushing apart representations of different weather conditions (visual and textual) and pulling together similar ones. Through extensive experiments, the proposed method achieves state-of-the-art performance on the nuScenes and Oxford RobotCar datasets, underscoring its robustness and efficiency in adapting to varied adverse environments.
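To make the two objectives concrete, below is a minimal sketch of how PDDA's alignment loss and VTCCL's contrastive objective could be written, assuming CLIP-style image and text embeddings that live in a shared space. The function names and exact loss forms are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F


def pdda_alignment_loss(img_src, img_trg, txt_src, txt_trg):
    """PDDA sketch: the source->target shift of image embeddings is encouraged to
    equal the source->target shift of text embeddings (assumed L1 penalty)."""
    return F.l1_loss(img_trg - img_src, txt_trg - txt_src)


def vtccl_loss(img_emb, txt_emb, temperature=0.07):
    """VTCCL sketch: CLIP-style symmetric contrastive loss over weather conditions.
    Matching (vision, text) pairs are pulled together; mismatched pairs pushed apart."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T / temperature                 # (N, N) similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))
```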