🤖 AI Summary
This work addresses the degraded performance of current vision-language models (VLMs) under adverse weather conditions—such as rain, snow, and fog—where visual corruption impairs their ability to perform reliable reasoning-based segmentation. To systematically evaluate VLM robustness in such scenarios, we introduce WeatherReasonSeg, a novel benchmark that integrates controllable synthetic weather effects with real-world adverse-weather scenes. We further propose mask-guided prompting for large language models to generate semantically coherent queries. Through an evaluation framework spanning five reasoning dimensions, our experiments reveal a monotonic decline in VLM performance with increasing weather severity and uncover distinct vulnerability patterns across different weather types. This study establishes a new benchmark and analytical perspective for advancing the reliability of VLMs in complex environmental conditions.
📝 Abstract
Existing vision-language models (VLMs) have demonstrated impressive performance in reasoning-based segmentation. However, current benchmarks are primarily constructed from high-quality images captured under idealized conditions. This raises a critical question: when visual cues are severely degraded by adverse weather conditions such as rain, snow, or fog, can VLMs sustain reliable reasoning segmentation capabilities? In response to this challenge, we introduce WeatherReasonSeg, a benchmark designed to evaluate VLM performance in reasoning-based segmentation under adverse weather conditions. It consists of two complementary components. First, we construct a controllable reasoning dataset by applying synthetic weather with varying severity levels to existing segmentation datasets, enabling fine-grained robustness analysis. Second, to capture real-world complexity, we curate a real-world adverse-weather reasoning segmentation dataset with semantically consistent queries generated via mask-guided LLM prompting. We further broaden the evaluation scope across five reasoning dimensions: functionality, application scenarios, structural attributes, interactions, and requirement matching. Extensive experiments across diverse VLMs reveal two key findings: (1) VLM performance degrades monotonically with increasing weather severity, and (2) different weather types induce distinct vulnerability patterns. We hope WeatherReasonSeg will serve as a foundation for advancing robust, weather-aware reasoning segmentation.
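To make the "synthetic weather with varying severity levels" idea concrete, here is a minimal, illustrative sketch of severity-controlled fog synthesis in the style of common corruption benchmarks. The function name `add_fog`, the five-level severity scale, and the blending coefficients are assumptions for illustration; the paper's actual weather-synthesis pipeline is not specified here.

```python
import numpy as np

def add_fog(image: np.ndarray, severity: int = 1) -> np.ndarray:
    """Blend a uniform haze layer into an RGB image with values in [0, 1].

    `severity` in 1..5 controls haze density. This is a simplified
    stand-in for a weather-synthesis pipeline, not the paper's method.
    """
    assert 1 <= severity <= 5, "severity must be in 1..5"
    # Haze transmittance grows with severity (illustrative values).
    t = [0.15, 0.30, 0.45, 0.60, 0.75][severity - 1]
    airlight = np.ones_like(image)  # white atmospheric light
    # Standard haze model: blend scene radiance toward the airlight.
    return (1.0 - t) * image + t * airlight

# Example: generate one corrupted copy per severity level,
# as a controllable robustness sweep would.
img = np.random.rand(64, 64, 3)
corrupted = {s: add_fog(img, s) for s in range(1, 6)}
```

The same severity-indexed design extends to rain or snow by swapping the haze blend for streak or particle overlays, which is what enables the fine-grained, monotonic robustness analysis described above.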