🤖 AI Summary
This work addresses domain shift in cross-platform 3D object detection by proposing a domain adaptation method built upon PVRCNN++. The approach integrates domain-specific data augmentation with a confidence-threshold-guided pseudo-labeling self-training strategy to effectively mitigate distributional discrepancies between source and target domains. While preserving the advantages of joint point-cloud and voxel feature representations, the method significantly enhances model generalization to unseen LiDAR platforms. In the RoboSense2025 Challenge, the proposed solution achieved third place: in Phase-1, it attained a 3D AP of 62.67% for the Car class on the target domain; in Phase-2, it reached 58.76% for Car and 49.81% for Pedestrian classes, respectively.
📝 Abstract
This technical report presents our award-winning solution to the Cross-platform 3D Object Detection task in the RoboSense2025 Challenge. Our approach is built upon PVRCNN++, an efficient 3D object detection framework that effectively integrates point-based and voxel-based features. On top of this foundation, we improve cross-platform generalization by narrowing domain gaps through tailored data augmentation and a self-training strategy with pseudo-labels. These enhancements enabled our approach to secure 3rd place in the challenge, achieving a 3D AP of 62.67% for the Car category on the phase-1 target domain, and 58.76% and 49.81% for the Car and Pedestrian categories respectively on the phase-2 target domain.
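The core of the confidence-threshold-guided pseudo-labeling strategy can be illustrated with a minimal sketch: a source-trained detector runs on unlabeled target-domain scans, and only detections above a per-class confidence threshold are kept as pseudo-labels for self-training. The function name, data layout, and threshold values below are illustrative assumptions; the report does not disclose the exact procedure or hyperparameters.

```python
# Hypothetical sketch of confidence-thresholded pseudo-label filtering
# for self-training on an unlabeled target domain. Names, data layout,
# and threshold values are assumptions, not the report's actual code.

def filter_pseudo_labels(detections, thresholds):
    """Keep detections whose score meets the per-class threshold.

    detections: list of dicts with 'class', 'score', and 'box' keys.
    thresholds: dict mapping class name -> minimum confidence;
                classes without a threshold are dropped entirely.
    """
    return [
        det for det in detections
        if det["score"] >= thresholds.get(det["class"], float("inf"))
    ]

# Target-domain detections from a source-trained detector (illustrative).
dets = [
    {"class": "Car", "score": 0.91, "box": (1.0, 2.0, 0.5)},
    {"class": "Car", "score": 0.42, "box": (4.0, 1.0, 0.6)},
    {"class": "Pedestrian", "score": 0.75, "box": (2.5, 3.0, 0.4)},
]
# Assumed thresholds; low-confidence detections are discarded so that
# noisy boxes do not pollute the self-training labels.
pseudo = filter_pseudo_labels(dets, {"Car": 0.7, "Pedestrian": 0.6})
```

In a full self-training loop, the surviving pseudo-labels would be merged with the source-domain ground truth and the detector retrained, typically for several rounds as target-domain confidence improves.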