Event-aided Semantic Scene Completion

📅 2025-02-04
🤖 AI Summary
Semantic scene completion (SSC) on RGB images suffers severe performance degradation under motion blur, low-light conditions, and adverse weather. To address this, we propose an event-enhanced SSC framework that leverages the high dynamic range and low-latency advantages of event cameras. Our contributions are threefold: (1) We introduce DSEC-SSC, the first real-world, event-augmented SSC benchmark; (2) We design a visibility-aware 4D dynamic annotation pipeline to generate accurate spatiotemporal voxel labels; (3) We propose the Event-aided Lifting Module (ELM), enabling efficient alignment and fusion of heterogeneous RGB and event features within 3D voxel space, compatible with both LSS and Transformer-based SSC architectures. Experiments on SemanticKITTI-C, covering five realistic degradation scenarios, demonstrate up to a 52.5% relative improvement in mIoU, with particularly substantial gains under motion blur and extreme weather conditions. The code and dataset are publicly released.

📝 Abstract
Autonomous driving systems rely on robust 3D scene understanding. Recent advances in Semantic Scene Completion (SSC) for autonomous driving underscore the limitations of RGB-based approaches, which struggle under motion blur, poor lighting, and adverse weather. Event cameras, offering high dynamic range and low latency, address these challenges by providing asynchronous data that complements RGB inputs. We present DSEC-SSC, the first real-world benchmark specifically designed for event-aided SSC, which includes a novel 4D labeling pipeline for generating dense, visibility-aware labels that adapt dynamically to object motion. Our proposed RGB-Event fusion framework, EvSSC, introduces an Event-aided Lifting Module (ELM) that effectively bridges 2D RGB-Event features to 3D space, enhancing view transformation and the robustness of 3D volume construction across SSC models. Extensive experiments on DSEC-SSC and simulated SemanticKITTI-E demonstrate that EvSSC is adaptable to both transformer-based and LSS-based SSC architectures. Notably, evaluations on SemanticKITTI-C demonstrate that EvSSC achieves consistently improved prediction accuracy across five degradation modes and both In-domain and Out-of-domain settings, achieving up to a 52.5% relative improvement in mIoU when the image sensor partially fails. Additionally, we quantitatively and qualitatively validate the superiority of EvSSC under motion blur and extreme weather conditions, where autonomous driving is challenged. The established datasets and our codebase will be made publicly available at https://github.com/Pandapan01/EvSSC.
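The LSS-style lifting path that the abstract describes (bridging fused 2D RGB-Event features to 3D space) can be illustrated with a minimal numpy sketch. This is not the paper's ELM: the fusion here is a plain addition and the depth head is a random linear map, both placeholder assumptions, purely to show how a per-pixel categorical depth distribution lifts fused 2D features into a depth-weighted 3D frustum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: H x W image plane, C-dim features, D depth bins.
H, W, C, D = 8, 16, 32, 24

# Stand-ins for 2D backbone outputs (in the paper these would come from
# learned RGB and event encoders; here they are random placeholders).
rgb_feat = rng.standard_normal((H, W, C))
evt_feat = rng.standard_normal((H, W, C))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def lift_fused_features(rgb_feat, evt_feat, w_depth):
    """LSS-style lifting of fused 2D features into a camera frustum.

    1) Fuse RGB and event features (simple addition here; the paper's
       ELM is a learned fusion module, not reproduced by this sketch).
    2) Predict a categorical depth distribution per pixel.
    3) Outer-product the fused feature with the depth distribution,
       yielding an (H, W, D, C) frustum of depth-weighted features
       ready to be splatted into a voxel grid.
    """
    fused = rgb_feat + evt_feat                            # (H, W, C)
    depth_logits = fused @ w_depth                         # (H, W, D)
    depth_prob = softmax(depth_logits, axis=-1)            # sums to 1 over D
    frustum = depth_prob[..., None] * fused[..., None, :]  # (H, W, D, C)
    return frustum, depth_prob

w_depth = rng.standard_normal((C, D)) * 0.1  # placeholder depth head
frustum, depth_prob = lift_fused_features(rgb_feat, evt_feat, w_depth)
print(frustum.shape)  # (8, 16, 24, 32)
```

Because the depth distribution sums to one per pixel, summing the frustum over the depth axis recovers the fused 2D feature; a full pipeline would instead splat these frustum features into voxels before the 3D completion head.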
Problem

Research questions and friction points this paper is trying to address.

Enhance 3D scene understanding for autonomous driving
Overcome RGB-based limitations in adverse conditions
Integrate event cameras for robust SSC solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event-aided Lifting Module
RGB-Event fusion framework
4D labeling pipeline
Authors

Shangwei Guo — Chongqing University (AI System Security, Data Privacy)
Hao Shi — State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang University, Hangzhou 310027, China
Song Wang — College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
Xiaoting Yin — State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang University, Hangzhou 310027, China
Kailun Yang — Professor, School of Artificial Intelligence and Robotics, Hunan University (HNU); KIT; UAH; ZJU (Computer Vision, Computational Optics, Intelligent Vehicles, Autonomous Driving, Robotics)
Kaiwei Wang — Professor, Zhejiang University (Optical Measurement, Machine Vision, Assistive Technology, Intelligent Transportation Systems)