OmniHD-Scenes: A Next-Generation Multimodal Dataset for Autonomous Driving, submitted to IEEE T-PAMI
Doracamom: Joint 3D Detection and Occupancy Prediction with Multi-view 4D Radars and Cameras for Omnidirectional Perception, submitted to IEEE T-CSVT
RCFusion: Fusing 4-D Radar and Camera With Bird’s-Eye View Features for 3-D Object Detection, published in IEEE Transactions on Instrumentation and Measurement
TJ4DRadSet: A 4D Radar Dataset for Autonomous Driving, presented at 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)
MetaOcc: Surround-View 4D Radar and Camera Fusion Framework for 3D Occupancy Prediction with Dual Training Strategies, submitted to IEEE RA-L
MS-Occ: Multi-Stage LiDAR-Camera Fusion for 3D Semantic Occupancy Prediction, submitted to IEEE RA-L
UMT-Net: A Uniform Multi-Task Network With Adaptive Task Weighting, published in IEEE Transactions on Intelligent Vehicles
SGDet3D: Semantics and Geometry Fusion for 3D Object Detection Using 4D Radar and Camera, published in IEEE Robotics and Automation Letters
TDFANet: Encoding Sequential 4D Radar Point Clouds Using Trajectory-Guided Deformable Feature Aggregation for Place Recognition, presented at ICRA 2025
Talk2PC: Enhancing 3D Visual Grounding through LiDAR and Radar Point Clouds Fusion for Autonomous Driving, submitted to Pattern Recognition
Radar Transformer: An Object Classification Network Based on 4D MMW Imaging Radar, published in Sensors 2021
4DRVO-Net: Deep 4D Radar–Visual Odometry Using Multi-Modal and Multi-Scale Adaptive Fusion, published in IEEE Transactions on Intelligent Vehicles
Background
Currently pursuing a Ph.D. in the School of Automotive Studies at Tongji University. Research interests include 3D object detection, occupancy prediction, 4D radar perception, multimodal fusion, and data closed-loop systems. Recent research focuses on integrating large language models (LLMs) with 4D imaging radar and vision fusion in an end-to-end architecture.