🤖 AI Summary
Existing motion prediction methods for autonomous driving often treat discrete scenarios in isolation, neglecting the temporal continuity and historical context inherent in real-world driving. To address this limitation, this work proposes PanguMotion, a framework that, for the first time, integrates Transformer blocks from the Pangu-1B large language model as feature enhancement components for continuous driving trajectory prediction. Using the RealMotion data reorganization strategy, the method constructs temporally coherent scene sequences from the Argoverse 2 dataset, substantially strengthening temporal modeling and improving both the accuracy and robustness of trajectory prediction.
📝 Abstract
Motion forecasting is a core task in autonomous driving systems, aiming to accurately predict the future trajectories of surrounding agents to ensure driving safety. Existing methods typically process discrete driving scenes independently, neglecting the temporal continuity and historical context inherent in real-world driving environments. This paper proposes PanguMotion, a motion forecasting framework for continuous driving scenarios that integrates Transformer blocks from the Pangu-1B large language model as feature enhancement modules within a motion prediction architecture. We conduct experiments on the Argoverse 2 dataset processed with the RealMotion data reorganization strategy, which transforms independent scenes into continuous sequences that mimic real-world driving.
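The abstract describes the RealMotion reorganization only at a high level. As a rough illustration of the general idea — grouping independently stored scenes by their source recording and ordering them in time to form continuous sequences — a minimal sketch might look like the following. All names (`Scene`, `log_id`, `build_continuous_sequences`) are hypothetical and not taken from the paper or the Argoverse 2 API:

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical minimal scene record; real Argoverse 2 scenarios carry
# far richer state (agent tracks, HD map, etc.).
@dataclass
class Scene:
    log_id: str       # identifier of the recording the scene was clipped from
    timestamp: float  # start time of the scene window within that recording

def build_continuous_sequences(scenes: List[Scene]) -> Dict[str, List[Scene]]:
    """Group independently stored scenes by source recording and sort each
    group by time, yielding temporally coherent scene sequences."""
    sequences: Dict[str, List[Scene]] = {}
    for scene in scenes:
        sequences.setdefault(scene.log_id, []).append(scene)
    for log_id in sequences:
        sequences[log_id].sort(key=lambda s: s.timestamp)
    return sequences

# Usage: four scenes stored in arbitrary order become two ordered sequences.
scenes = [
    Scene("log_a", 2.0), Scene("log_b", 0.0),
    Scene("log_a", 0.0), Scene("log_a", 1.0),
]
seqs = build_continuous_sequences(scenes)
```

A downstream model could then consume each ordered sequence so that the prediction for one scene can condition on features from the preceding scenes, rather than treating every scene as an isolated sample.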