Self-Supervised Pre-training with Combined Datasets for 3D Perception in Autonomous Driving

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address 3D perception models' reliance on large-scale annotated data and their poor cross-dataset generalization in autonomous driving, this paper proposes the first cross-dataset self-supervised pre-training framework tailored for 3D perception. The framework leverages heterogeneous, unlabeled multi-source data—including images, point clouds, and BEV sequences—and introduces a prompt adapter-based domain bias correction mechanism that enables efficient domain adaptation alongside co-optimization of the backbone network. It supports unified pre-training across multiple downstream tasks, including 3D detection, BEV segmentation, 3D tracking, and occupancy prediction. Extensive experiments demonstrate consistent performance gains as the scale of unlabeled data increases, with significant improvements over state-of-the-art self-supervised and cross-domain methods on benchmarks such as nuScenes. The code will be made publicly available.

📝 Abstract
The significant achievements of pre-trained models leveraging large volumes of data in NLP and 2D vision inspire us to explore the potential of extensive data pre-training for 3D perception in autonomous driving. Toward this goal, this paper proposes to utilize massive unlabeled data from heterogeneous datasets to pre-train 3D perception models. We introduce a self-supervised pre-training framework that learns effective 3D representations from scratch on unlabeled data, combined with a prompt adapter-based domain adaptation strategy to reduce dataset bias. The approach significantly improves model performance on downstream tasks such as 3D object detection, BEV segmentation, 3D object tracking, and occupancy prediction, and shows a steady performance increase as the training data volume scales up, demonstrating the potential to continually benefit 3D perception models for autonomous driving. We will release the source code to inspire further investigation in the community.
Problem

Research questions and friction points this paper is trying to address.

Explores large-scale pre-training for 3D autonomous driving perception
Proposes self-supervised learning to reduce dataset bias in 3D models
Improves downstream tasks like 3D detection and BEV segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised pre-training for 3D perception
Combined heterogeneous datasets for training
Prompt adapter for domain adaptation
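The prompt-adapter idea above can be sketched as follows: each source dataset gets its own small learnable "prompt" vector that shifts the shared backbone's features, so dataset-specific bias is absorbed by the prompt while the backbone weights remain shared across datasets. This is a minimal, framework-free sketch of the general technique; the class name `PromptAdapter`, the `dim` parameter, and the dataset identifiers are illustrative assumptions, not the paper's actual API.

```python
class PromptAdapter:
    """Toy per-dataset bias correction: one prompt vector per source dataset.

    Hypothetical sketch of a prompt-adapter mechanism; names and structure
    are assumptions, not the paper's implementation.
    """

    def __init__(self, dataset_ids, dim):
        # one prompt vector per source dataset, initialised to zero
        self.prompts = {d: [0.0] * dim for d in dataset_ids}

    def __call__(self, features, dataset_id):
        # correct domain bias: shift shared backbone features by the
        # dataset-specific prompt before they reach the shared head
        prompt = self.prompts[dataset_id]
        return [f + p for f, p in zip(features, prompt)]


# toy usage: the same backbone feature is corrected per dataset
adapter = PromptAdapter(["nuscenes", "waymo"], dim=3)
adapter.prompts["waymo"] = [0.1, -0.2, 0.0]  # pretend this was learned
shared_feat = [1.0, 1.0, 1.0]
print(adapter(shared_feat, "nuscenes"))  # unchanged: [1.0, 1.0, 1.0]
print(adapter(shared_feat, "waymo"))     # shifted:   [1.1, 0.8, 1.0]
```

In a real system the prompts would be trained jointly with the backbone during pre-training, letting the backbone converge to dataset-agnostic representations.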