Self-supervised Pretraining for Integrated Prediction and Planning of Automated Vehicles

📅 2025-07-13
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current autonomous driving prediction and planning methods predominantly rely on imitation learning, lacking joint modeling of scene semantics, interactive intentions, and navigation goals. To address this, we propose Plan-MAE, the first masked autoencoder (MAE) pretraining framework explicitly designed for joint prediction and planning. Plan-MAE unifies reconstruction tasks across road topology, agent trajectories, navigation routes, and local sub-plans within a single architecture. Leveraging multi-task self-supervised pretraining and contextual feature fusion, it enables end-to-end, integrated prediction and planning. Evaluated on large-scale benchmarks, Plan-MAE achieves significant improvements in planning success rate and trajectory plausibility, surpassing state-of-the-art methods across key metrics. Our results empirically validate the effectiveness and generalization capability of masked reconstruction-based pretraining for learning-based motion planners.

πŸ“ Abstract
Predicting the future of surrounding agents and accordingly planning a safe, goal-directed trajectory are crucial for automated vehicles. Current methods typically rely on imitation learning to optimize metrics against the ground truth, often overlooking how scene understanding could enable more holistic trajectories. In this paper, we propose Plan-MAE, a unified pretraining framework for prediction and planning that capitalizes on masked autoencoders. Plan-MAE fuses critical contextual understanding via three dedicated tasks: reconstructing masked road networks to learn spatial correlations, agent trajectories to model social interactions, and navigation routes to capture destination intents. To further align vehicle dynamics and safety constraints, we incorporate a local sub-planning task predicting the ego-vehicle's near-term trajectory segment conditioned on earlier segment. This pretrained model is subsequently fine-tuned on downstream tasks to jointly generate the prediction and planning trajectories. Experiments on large-scale datasets demonstrate that Plan-MAE outperforms current methods on the planning metrics by a large margin and can serve as an important pre-training step for learning-based motion planner.
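The masked-reconstruction idea at the core of the abstract can be sketched minimally: mask a subset of scene tokens (e.g. trajectory waypoints) and compute the reconstruction loss only on the masked positions. This is an illustrative NumPy sketch of the general MAE recipe, not the paper's implementation; the token layout, mask ratio, and function name are assumptions.

```python
import numpy as np

def masked_reconstruction_loss(tokens, recon, mask_ratio=0.5, rng=None):
    """MAE-style objective: MSE over masked positions only.

    tokens: (N, D) ground-truth scene tokens (e.g. waypoints).
    recon:  (N, D) decoder outputs for all positions.
    Returns (loss, masked_idx). Hypothetical helper, not Plan-MAE code.
    """
    rng = rng or np.random.default_rng(0)
    n = tokens.shape[0]
    n_mask = max(1, int(round(mask_ratio * n)))  # how many tokens to hide
    masked_idx = rng.choice(n, size=n_mask, replace=False)
    diff = tokens[masked_idx] - recon[masked_idx]  # error on masked tokens only
    return float(np.mean(diff ** 2)), masked_idx
```

In a real pretraining loop the masked tokens would be replaced by a learned mask embedding before encoding; here only the loss computation is shown.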
Problem

Research questions and friction points this paper is trying to address.

Predicting future agent behaviors for automated vehicles
Planning safe trajectories using scene understanding
Unifying prediction and planning via pretraining framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses masked autoencoders for unified prediction and planning
Reconstructs road networks, agent trajectories, and navigation routes
Incorporates local sub-planning for vehicle dynamics alignment
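The local sub-planning task listed above amounts to predicting a near-term ego trajectory segment conditioned on an earlier one. A hedged sketch of how such (earlier, target) training pairs could be built from a logged ego trajectory; segment length and the pairing scheme are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def make_subplan_pairs(ego_traj, seg_len=10):
    """Split an ego trajectory of shape (T, 2) into consecutive
    (earlier_segment, near_term_segment) pairs for a local
    sub-planning objective. Hypothetical helper, not Plan-MAE code."""
    pairs = []
    # Slide in steps of seg_len so each target segment directly
    # follows its conditioning segment.
    for start in range(0, len(ego_traj) - 2 * seg_len + 1, seg_len):
        earlier = ego_traj[start:start + seg_len]
        target = ego_traj[start + seg_len:start + 2 * seg_len]
        pairs.append((earlier, target))
    return pairs
```

A predictor trained on such pairs learns short-horizon continuations consistent with vehicle dynamics, which is the stated motivation for the sub-planning task.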