AI Summary
Current autonomous driving prediction and planning methods predominantly rely on imitation learning and lack joint modeling of scene semantics, interactive intentions, and navigation goals. To address this, we propose Plan-MAE, the first masked autoencoder (MAE) pretraining framework explicitly designed for joint prediction and planning. Plan-MAE unifies reconstruction tasks across road topology, agent trajectories, navigation routes, and local sub-plans within a single architecture. Leveraging multi-task self-supervised pretraining and contextual feature fusion, it enables end-to-end, integrated prediction and planning. Evaluated on large-scale benchmarks, Plan-MAE achieves significant improvements in planning success rate and trajectory plausibility, surpassing state-of-the-art methods across key metrics. Our results empirically validate the effectiveness and generalization capability of masked reconstruction-based pretraining for learning-based motion planners.
Abstract
Predicting the future of surrounding agents and accordingly planning a safe, goal-directed trajectory are crucial for automated vehicles. Current methods typically rely on imitation learning to optimize metrics against the ground truth, often overlooking how scene understanding could enable more holistic trajectories. In this paper, we propose Plan-MAE, a unified pretraining framework for prediction and planning that capitalizes on masked autoencoders. Plan-MAE fuses critical contextual understanding via three dedicated tasks: reconstructing masked road networks to learn spatial correlations, agent trajectories to model social interactions, and navigation routes to capture destination intents. To further align with vehicle dynamics and safety constraints, we incorporate a local sub-planning task that predicts the ego-vehicle's near-term trajectory segment conditioned on the earlier segment. This pretrained model is subsequently fine-tuned on downstream tasks to jointly generate the prediction and planning trajectories. Experiments on large-scale datasets demonstrate that Plan-MAE outperforms current methods on planning metrics by a large margin and can serve as an important pretraining step for learning-based motion planners.
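The core pretraining idea, masking parts of a scene element (here, an agent trajectory) and training the model to reconstruct only the hidden parts, can be illustrated with a minimal sketch. This is not the paper's architecture: the masking helper, the masked-position loss, and the stand-in "decoder" (linear interpolation over visible waypoints) are illustrative assumptions, shown only to make the masked-reconstruction objective concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_tokens(tokens, mask_ratio, rng):
    """Hide a random fraction of tokens (waypoints) by zeroing them.
    Returns the masked copy and a boolean mask of hidden positions."""
    n = tokens.shape[0]
    n_mask = int(round(mask_ratio * n))
    idx = rng.choice(n, size=n_mask, replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    masked = tokens.copy()
    masked[mask] = 0.0
    return masked, mask

def reconstruction_loss(pred, target, mask):
    """MAE-style objective: MSE computed only on the masked positions."""
    diff = pred[mask] - target[mask]
    return float(np.mean(diff ** 2))

# Toy agent trajectory: 10 waypoints with (x, y) coordinates.
traj = np.stack([np.linspace(0.0, 9.0, 10), np.linspace(0.0, 4.5, 10)], axis=1)
masked_traj, mask = mask_tokens(traj, mask_ratio=0.4, rng=rng)

# Stand-in for a learned decoder: interpolate masked waypoints
# from the visible ones (a real model would predict them).
visible = np.flatnonzero(~mask)
pred = traj.copy()
for d in range(traj.shape[1]):
    pred[:, d] = np.interp(np.arange(10), visible, traj[visible, d])

loss = reconstruction_loss(pred, traj, mask)
```

In the full framework, the same masked-reconstruction objective would be applied per task (road topology, trajectories, routes, sub-plans) with a shared encoder, and the loss would drive gradient updates rather than a fixed interpolator.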