🤖 AI Summary
Addressing three key challenges in out-of-distribution (OOD) detection for road-scene semantic segmentation—dense occlusion among objects, small distant instances, and foreground-dominant large objects—this paper introduces the first Chain-of-Thought (CoT)-based visual reasoning framework for the task. The method leverages the knowledge and logical reasoning capabilities of multimodal large language models (e.g., GPT-4), guided by structured prompt engineering that decomposes image understanding into interpretable, stepwise semantic analysis and anomaly pattern identification. This enhances both OOD discrimination accuracy and model transparency. Evaluated on standard benchmarks and a newly curated challenging subset of the RoadAnomaly dataset, the approach achieves new state-of-the-art performance, improving mean Average Precision (mAP) by 5.2% over prior methods while demonstrating superior robustness and generalization across diverse distribution shifts.
📝 Abstract
Effective Out-of-Distribution (OOD) detection is critical for ensuring the reliability of semantic segmentation models, particularly in complex road environments where safety and accuracy are paramount. Despite recent advancements in large language models (LLMs), notably GPT-4, which have significantly enhanced multimodal reasoning through Chain-of-Thought (CoT) prompting, the application of CoT-based visual reasoning to OOD semantic segmentation remains largely unexplored. In this paper, through extensive analyses of road-scene anomalies, we identify three challenging scenarios where current state-of-the-art OOD segmentation methods consistently struggle: (1) densely packed and overlapping objects, (2) distant scenes with small objects, and (3) large foreground-dominant objects. To address these challenges, we propose a novel CoT-based framework targeting OOD detection in road anomaly scenes. Our method leverages the extensive knowledge and reasoning capabilities of foundation models, such as GPT-4, to enhance OOD detection through improved image understanding and prompt-based reasoning aligned with the observed problematic scene attributes. Extensive experiments show that our framework consistently outperforms state-of-the-art methods on both standard benchmarks and our newly defined challenging subset of the RoadAnomaly dataset, offering a robust and interpretable solution for OOD semantic segmentation in complex driving environments.
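The stepwise "describe, enumerate, compare, flag" decomposition that the abstract attributes to CoT prompting could be sketched as follows. This is a minimal illustration, not the paper's actual prompts: the in-distribution class list, the `build_cot_prompt` and `parse_ood_objects` helpers, and the stubbed model response are all assumptions standing in for a real multimodal LLM call (e.g., to GPT-4).

```python
# Hedged sketch of CoT-style prompting for road-scene OOD detection.
# All names here are hypothetical; the paper's real prompt design is not shown.

# Assumed closed-set (in-distribution) road-scene classes for illustration.
IN_DISTRIBUTION = {"road", "car", "pedestrian", "traffic sign", "building", "vegetation"}

def build_cot_prompt(image_caption: str) -> str:
    """Compose a stepwise (Chain-of-Thought) prompt over a scene description."""
    return (
        "You are analyzing a road scene for anomalies.\n"
        f"Scene: {image_caption}\n"
        "Step 1: List every object you can identify.\n"
        "Step 2: For each object, state whether it belongs to the known "
        f"road-scene classes {sorted(IN_DISTRIBUTION)}.\n"
        "Step 3: Output each object that does NOT belong, one per line, "
        "prefixed with 'OOD:'."
    )

def parse_ood_objects(model_response: str) -> list[str]:
    """Extract the anomaly labels produced by the final reasoning step."""
    return [line.removeprefix("OOD:").strip()
            for line in model_response.splitlines()
            if line.startswith("OOD:")]

# Stubbed response, standing in for an actual multimodal LLM call:
demo_response = "Step 1: road, car, sofa\nStep 2: ...\nOOD: sofa"
print(parse_ood_objects(demo_response))  # → ['sofa']
```

In a full pipeline, the parsed anomaly labels would then be grounded back to pixel regions (e.g., via a segmentation head or open-vocabulary detector) to produce OOD masks; that grounding step is beyond this sketch.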