🤖 AI Summary
This work addresses the insufficient reliability of autonomous driving road perception in complex dynamic environments. We propose an end-to-end perception framework integrating deep learning with a multimodal large language model (MLLM). Our method introduces a lightweight MLLM instruction-tuning paradigm—requiring no pretraining—and synergistically combines CNN-based semantic segmentation with polynomial curve fitting for robust lane detection. Crucially, we introduce a semantic reasoning mechanism to handle adverse scenarios such as rain, fog, and road surface degradation. Experiments demonstrate: 99.8% accuracy in traffic sign recognition; lane detection accuracy of 99.6% under clear conditions and 93.0% at night; reasoning accuracy of 88.4% under rainy conditions and 95.6% under road degradation. The system achieves a Question Overall Accuracy (QNS) of 82.83% and a Frame Overall Accuracy (FRM) of 53.87%, significantly enhancing safety and generalization capability in real-world complex scenarios.
📝 Abstract
Autonomous vehicles (AVs) require reliable traffic sign recognition and robust lane detection to ensure safe navigation in complex and dynamic environments. This paper introduces an integrated approach combining advanced deep learning techniques and Multimodal Large Language Models (MLLMs) for comprehensive road perception. For traffic sign recognition, we systematically evaluate ResNet-50, YOLOv8, and RT-DETR, achieving state-of-the-art accuracy of 99.8% with ResNet-50, 98.0% with YOLOv8, and 96.6% with RT-DETR despite its higher computational complexity. For lane detection, we propose a CNN-based segmentation method enhanced by polynomial curve fitting, which delivers high accuracy under favorable conditions. Furthermore, we introduce a lightweight multimodal LLM-based framework that undergoes instruction tuning directly on small yet diverse datasets, eliminating the need for initial pretraining. This framework effectively handles various lane types, complex intersections, and merging zones, significantly enhancing lane detection reliability through reasoning under adverse conditions. Despite constraints on available training resources, our multimodal approach demonstrates advanced reasoning capabilities, achieving a Frame Overall Accuracy (FRM) of 53.87%, a Question Overall Accuracy (QNS) of 82.83%, lane detection accuracies of 99.6% in clear conditions and 93.0% at night, and robust performance in reasoning about lane invisibility due to rain (88.4%) or road degradation (95.6%). The proposed comprehensive framework markedly enhances AV perception reliability, contributing significantly to safer autonomous driving across diverse and challenging road scenarios.
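The abstract names polynomial curve fitting applied on top of CNN segmentation output for lane detection. As a minimal illustrative sketch (not the paper's implementation), the curve-fitting step can be expressed with NumPy's `polyfit` over the foreground pixels of a binary lane mask; the function name `fit_lane_polynomial` and the choice of fitting x as a function of y are our assumptions:

```python
import numpy as np

def fit_lane_polynomial(mask: np.ndarray, degree: int = 2) -> np.ndarray:
    """Fit a polynomial x = f(y) to the foreground pixels of a binary lane mask.

    Fitting x as a function of the row index y handles near-vertical lanes,
    which would be ill-conditioned as y = f(x). Returns the polynomial
    coefficients, highest degree first.
    """
    ys, xs = np.nonzero(mask)          # (row, col) coordinates of lane pixels
    return np.polyfit(ys, xs, degree)  # least-squares polynomial coefficients

# Toy example: a straight vertical lane marking at column 5 of a 10x10 mask.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[:, 5] = 1
coeffs = fit_lane_polynomial(mask, degree=1)  # recovers x ≈ 0*y + 5
```

In practice the segmentation network would supply the mask per lane instance, and the fitted curve smooths over gaps and noise in the pixel-level prediction.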