AD-L-JEPA: Self-Supervised Spatial World Models with Joint Embedding Predictive Architecture for Autonomous Driving with LiDAR Data

📅 2025-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the heavy reliance on labeled data and the weak world-modeling capability of LiDAR-based autonomous driving perception, this paper proposes the first LiDAR-native Joint Embedding Predictive Architecture (JEPA) framework for self-supervised pretraining. Instead of masked reconstruction or hand-crafted positive/negative sample construction, the method directly predicts semantic embeddings in Bird's Eye View (BEV) space, enabling non-generative, non-contrastive, label-free world model learning. The core innovation is a LiDAR-native JEPA paradigm that integrates point cloud encoding with end-to-end BEV feature learning, facilitating efficient transfer to downstream 3D detection tasks. Experiments demonstrate significant improvements over state-of-the-art methods, including Occupancy-MAE and ALSO, on benchmarks such as nuScenes, achieving higher label efficiency and cross-scene generalization. Both qualitative and quantitative analyses confirm the strong discriminability and spatial consistency of the learned embeddings.

📝 Abstract
As opposed to human drivers, current autonomous driving systems still require vast amounts of labeled data to train. Recently, world models have been proposed to simultaneously enhance autonomous driving capabilities by improving the way these systems understand complex real-world environments and reduce their data demands via self-supervised pre-training. In this paper, we present AD-L-JEPA (aka Autonomous Driving with LiDAR data via a Joint Embedding Predictive Architecture), a novel self-supervised pre-training framework for autonomous driving with LiDAR data that, as opposed to existing methods, is neither generative nor contrastive. Our method learns spatial world models with a joint embedding predictive architecture. Instead of explicitly generating masked unknown regions, our self-supervised world models predict Bird's Eye View (BEV) embeddings to represent the diverse nature of autonomous driving scenes. Our approach furthermore eliminates the need to manually create positive and negative pairs, as is the case in contrastive learning. AD-L-JEPA leads to simpler implementation and enhanced learned representations. We qualitatively and quantitatively demonstrate the high quality of embeddings learned with AD-L-JEPA. We furthermore evaluate the accuracy and label efficiency of AD-L-JEPA on popular downstream tasks such as LiDAR 3D object detection and associated transfer learning. Our experimental evaluation demonstrates that AD-L-JEPA is a plausible approach for self-supervised pre-training in autonomous driving applications and outperforms state-of-the-art methods, including the recently proposed Occupancy-MAE [1] and ALSO [2]. The source code of AD-L-JEPA is available at https://github.com/HaoranZhuExplorer/AD-L-JEPA-Release.
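To make the JEPA idea concrete, the following is a minimal illustrative sketch of embedding-space prediction over a masked BEV grid. It is not the paper's implementation: the linear "encoders", the grid dimensions, the masking ratio, and the EMA momentum are all hypothetical stand-ins. What it shows is the core contrast the abstract draws: the loss is a regression between predicted and target *embeddings* of masked cells (no pixel/voxel reconstruction as in generative masked autoencoding, and no positive/negative pairs as in contrastive learning), with the target encoder updated as an exponential moving average of the context encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small BEV grid of H x W cells, D-dim embeddings.
H, W, D = 8, 8, 16

def encode(bev, weights):
    # Stand-in for a point-cloud -> BEV encoder: a per-cell linear map.
    return np.tanh(bev @ weights)

def predict(ctx_emb, weights):
    # Predictor network that maps context embeddings to target embeddings.
    return ctx_emb @ weights

# Context and target encoders share architecture; the target encoder's
# weights are an EMA of the context encoder's (a common JEPA design).
W_ctx = rng.normal(scale=0.1, size=(D, D))
W_tgt = W_ctx.copy()
W_pred = rng.normal(scale=0.1, size=(D, D))

bev = rng.normal(size=(H, W, D))        # raw per-cell BEV features from LiDAR
mask = rng.random((H, W)) < 0.5         # cells whose embeddings must be predicted

# Context encoder sees only unmasked cells; target encoder sees everything
# (its output would be treated as a constant, i.e. no gradient through it).
ctx_emb = encode(np.where(mask[..., None], 0.0, bev), W_ctx)
tgt_emb = encode(bev, W_tgt)

pred_emb = predict(ctx_emb, W_pred)

# Loss: regression in embedding space over masked cells only --
# non-generative (no raw-input reconstruction) and non-contrastive.
loss = np.mean((pred_emb[mask] - tgt_emb[mask]) ** 2)

# EMA update of the target encoder with momentum m.
m = 0.99
W_tgt = m * W_tgt + (1 - m) * W_ctx
```

In a real system the encoders are deep networks over voxelized point clouds and the predictor is conditioned on the positions of the masked BEV cells, but the loss structure stays the same as in this sketch.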
Problem

Research questions and friction points this paper is trying to address.

Autonomous Driving
World Model Learning
LiDAR Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

AD-L-JEPA
LiDAR data
3D object detection