SafePLUG: Empowering Multimodal LLMs with Pixel-Level Insight and Temporal Grounding for Traffic Accident Understanding

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing traffic accident understanding models predominantly perform coarse-grained image- or video-level analysis, failing to capture fine-grained visual details and local scene components. To address this, we propose the first multimodal large language model framework that integrates pixel-level semantic segmentation with temporally anchored event localization, enabling language-guided question answering over arbitrarily shaped regions, pixel-accurate segmentation, and temporal grounding of events. Methodologically, our approach combines visual prompt learning, cross-modal pixel alignment, a temporal grounding module, and instruction-driven segmentation. To support this work, we introduce the first multimodal traffic accident dataset with fine-grained spatiotemporal annotations. Extensive experiments demonstrate that our framework significantly outperforms all baselines across four core tasks (region-based question answering, pixel-level segmentation, temporal event localization, and holistic accident understanding), substantially enhancing fine-grained perception and reasoning in complex traffic scenarios.
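
The summary above mentions region-aware question answering driven by arbitrary-shaped visual prompts. The paper's actual region-encoding mechanism is not detailed here; as a generic, hedged illustration only, one common way to turn an arbitrary region into a prompt embedding is to average-pool image features under a binary mask (the function name and tensor shapes below are assumptions for this sketch, not SafePLUG's implementation):

```python
import numpy as np

def mask_pooled_feature(feature_map: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Average-pool an (H, W, C) feature map over a binary (H, W) region mask.

    Generic illustration of region-prompt encoding; not the SafePLUG
    implementation, which is not specified at this level of detail.
    """
    mask = region_mask.astype(bool)
    if not mask.any():
        raise ValueError("region mask selects no pixels")
    # Boolean indexing flattens the masked pixels to (N, C); average to (C,)
    return feature_map[mask].mean(axis=0)

# Toy example: a 4x4 feature map with 2 channels, mask covering the top-left 2x2 block
features = np.arange(32, dtype=float).reshape(4, 4, 2)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:2, :2] = 1
print(mask_pooled_feature(features, mask))  # → [5. 6.]
```

The resulting per-region embedding could then be fed to a language model alongside the question text, which is the general pattern behind mask-based visual prompting.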

📝 Abstract
Multimodal large language models (MLLMs) have achieved remarkable progress across a range of vision-language tasks and demonstrate strong potential for traffic accident understanding. However, existing MLLMs in this domain primarily focus on coarse-grained image-level or video-level comprehension and often struggle to handle fine-grained visual details or localized scene components, limiting their applicability in complex accident scenarios. To address these limitations, we propose SafePLUG, a novel framework that empowers MLLMs with both Pixel-Level Understanding and temporal Grounding for comprehensive traffic accident analysis. SafePLUG supports both arbitrary-shaped visual prompts for region-aware question answering and pixel-level segmentation based on language instructions, while also enabling the recognition of temporally anchored events in traffic accident scenarios. To advance the development of MLLMs for traffic accident understanding, we curate a new dataset containing multimodal question-answer pairs centered on diverse accident scenarios, with detailed pixel-level annotations and temporal event boundaries. Experimental results show that SafePLUG achieves strong performance on multiple tasks, including region-based question answering, pixel-level segmentation, temporal event localization, and accident event understanding. These capabilities lay a foundation for fine-grained understanding of complex traffic scenes, with the potential to improve driving safety and enhance situational awareness in smart transportation systems. The code, dataset, and model checkpoints will be made publicly available at: https://zihaosheng.github.io/SafePLUG
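
Among the tasks listed in the abstract, temporal event localization is conventionally evaluated with temporal IoU between a predicted and a ground-truth event interval. The sketch below shows that generic metric; it is an illustration of the standard evaluation idea, not SafePLUG's own evaluation code:

```python
def temporal_iou(pred, gt):
    """Intersection-over-union of two time intervals (start, end) in seconds.

    Standard metric for temporal grounding; shown as a generic illustration,
    not as the paper's evaluation protocol.
    """
    (ps, pe), (gs, ge) = pred, gt
    inter = max(0.0, min(pe, ge) - max(ps, gs))          # overlap length
    union = (pe - ps) + (ge - gs) - inter                # combined span
    return inter / union if union > 0 else 0.0

print(temporal_iou((2.0, 6.0), (4.0, 8.0)))  # → 0.3333333333333333
```

A localization is typically counted as correct when this score exceeds a threshold such as 0.5 or 0.7.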
Problem

Research questions and friction points this paper is trying to address.

Existing MLLMs analyze traffic accidents only at a coarse image or video level
Pixel-level understanding and temporally anchored event recognition are missing
Limited fine-grained scene comprehension constrains smart transportation applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pixel-level understanding for fine details
Temporal grounding for event recognition
Arbitrary-shaped visual prompts for region-aware QA