🤖 AI Summary
Fog severely degrades perception performance in autonomous driving, yet existing dehazing methods often improve image fidelity without corresponding gains in downstream detection/segmentation tasks; moreover, most evaluations rely on synthetic data, raising concerns about generalizability. This paper introduces the first perception-oriented, transparent dehazing benchmark, systematically evaluating traditional filters, deep dehazing networks, cascade strategies (filter→model, model→filter), and prompt-based vision-language models (VLMs) as both image editors and quality evaluators. Key contributions include: (1) the first use of VLMs for dehazing assessment, revealing a strong correlation (r > 0.92) between VLM scores and detection mAP; and (2) an empirical analysis on Foggy Cityscapes that delineates method-specific applicability boundaries, synergistic benefits, and degradation conditions, establishing a reproducible, interpretable evaluation paradigm for perception-driven dehazing.
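To make the reported VLM-score/mAP alignment concrete, the snippet below shows how such a Pearson correlation could be computed across benchmarked pipelines. This is a minimal sketch, not the authors' code; the per-pipeline scores and mAP values are invented placeholders.

```python
# Sketch: correlating VLM judge scores with detection mAP (hypothetical data).
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-pipeline results: rubric-based VLM score and the
# detection mAP measured on the same dehazed outputs.
vlm_scores = np.array([3.1, 5.4, 6.2, 7.0, 7.8, 8.5])
det_map    = np.array([0.18, 0.26, 0.29, 0.33, 0.36, 0.39])

r, p_value = pearsonr(vlm_scores, det_map)
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
# The paper reports r > 0.92 on Foggy Cityscapes; the values above are
# illustrative only.
```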
📝 Abstract
Autonomous driving perception systems are particularly vulnerable in foggy conditions, where light scattering reduces contrast and obscures fine details critical for safe operation. While numerous defogging methods exist, from handcrafted filters to learned restoration models, improvements in image fidelity do not consistently translate into better downstream detection and segmentation. Moreover, prior evaluations often rely on synthetic data, leaving questions about real-world transferability. We present a structured empirical study that benchmarks a comprehensive set of pipelines, including (i) classical filters, (ii) modern defogging networks, (iii) chained variants (filter$\rightarrow$model, model$\rightarrow$filter), and (iv) prompt-driven vision--language models (VLMs) applied as image editors directly to foggy images. Using Foggy Cityscapes, we assess both image quality and downstream performance on object detection (mAP) and panoptic segmentation (PQ, RQ, SQ). Our analysis reveals when defogging helps, when chaining yields synergy or degradation, and how VLM-based editors compare to dedicated approaches. In addition, we evaluate qualitative rubric-based scores from a VLM judge and quantify their alignment with task metrics, showing strong correlations with mAP. Together, these results establish a transparent, task-oriented benchmark for defogging methods and highlight the conditions under which preprocessing genuinely improves autonomous perception in adverse weather.
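To illustrate the chained variants, the following is a minimal sketch of the filter$\rightarrow$model cascade, assuming OpenCV for the classical stage; `dehaze_net` is a hypothetical stand-in for any learned defogging network and is not a component named in the paper.

```python
# Sketch: filter -> model cascade (OpenCV CLAHE as the classical stage).
import cv2
import numpy as np

def classical_filter(img_bgr: np.ndarray) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization (CLAHE) on the
    luminance channel -- one example of a handcrafted defogging filter."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def cascade_filter_then_model(img_bgr: np.ndarray, dehaze_net) -> np.ndarray:
    """Chained variant: classical filter first, learned model second.
    The reverse cascade (model -> filter) simply swaps the call order."""
    return dehaze_net(classical_filter(img_bgr))
```

The same structure covers all four pipeline families: the classical filter alone, the learned network alone, and the two cascade orderings, each of which is then scored on image quality, mAP, and the panoptic metrics.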