🤖 AI Summary
To address the challenges of insufficient trajectory diversity and low mode confidence in sparse multimodal motion prediction for autonomous driving, this paper introduces a novel sequential behavioral pattern modeling paradigm: explicitly representing multimodal future trajectories as sequences of behavioral patterns and progressively decoding them to capture inter-modal temporal dependencies, eliminating the need for post-hoc processing and dense sampling. Key contributions include: (1) the first sequential motion decoder architecture; (2) an Early-Match-Take-All (EMTA) end-to-end training strategy that enhances modal discriminability; and (3) an uncertainty-driven dynamic mode extrapolation mechanism. Evaluated on standard benchmarks, our method achieves superior trade-offs between trajectory accuracy and diversity, significantly improving mode coverage (+12.3%) and probabilistic calibration (ECE reduced by 38%). Moreover, it supports scaling the number of predicted modes at runtime.
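The core idea of progressive decoding can be sketched as a loop that emits one mode per step, feeding a running summary of previously decoded modes back into the next step. The snippet below is a minimal illustration of that control flow, not the paper's implementation; `toy_decode_step`, the additive `mode_memory` recurrence, and all shapes are hypothetical stand-ins for the learned decoder.

```python
import numpy as np

def toy_decode_step(scene_feat, mode_memory, step):
    # Hypothetical stand-in for a learned decoder layer: emit a
    # straight-line trajectory whose heading depends on the step index,
    # with confidence decaying as more mode mass has been emitted.
    angle = step * 2 * np.pi / 8
    direction = np.array([np.cos(angle), np.sin(angle)])
    traj = np.outer(np.arange(1, 31), direction)      # 30 future waypoints
    conf = 1.0 / (1.0 + step + mode_memory.sum())     # decaying confidence
    mode_feat = np.full_like(scene_feat, 0.1)         # feature of this mode
    return traj, conf, mode_feat

def sequential_mode_decode(scene_feat, num_modes, decode_step):
    """Decode modes one at a time, conditioning each step on a summary
    of the modes decoded so far (the sequential paradigm), instead of
    emitting all modes in one shot."""
    decoded, mode_memory = [], np.zeros_like(scene_feat)
    for step in range(num_modes):
        traj, conf, mode_feat = decode_step(scene_feat, mode_memory, step)
        decoded.append((traj, conf))
        mode_memory = mode_memory + mode_feat  # hypothetical recurrence
    return decoded

modes = sequential_mode_decode(np.ones(16), num_modes=6,
                               decode_step=toy_decode_step)
```

Because the number of loop iterations is just an argument, this structure is also what makes the runtime-scalable mode count natural: requesting more modes only means running the loop for more steps.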
📝 Abstract
Anticipating the multimodality of future events lays the foundation for safe autonomous driving. However, multimodal motion prediction for traffic agents has been clouded by the lack of multimodal ground truth. Existing works predominantly adopt the winner-take-all training strategy to tackle this challenge, yet still suffer from limited trajectory diversity and uncalibrated mode confidence. While some approaches address these limitations by generating excessive trajectory candidates, they necessitate a post-processing stage to identify the most representative modes, a process that lacks universal principles and compromises trajectory accuracy. We are thus motivated to introduce ModeSeq, a new multimodal prediction paradigm that models modes as sequences. Unlike the common practice of decoding multiple plausible trajectories in one shot, ModeSeq requires motion decoders to infer the next mode step by step, thereby more explicitly capturing the correlation between modes and significantly enhancing the ability to reason about multimodality. Leveraging the inductive bias of sequential mode prediction, we also propose the Early-Match-Take-All (EMTA) training strategy to diversify the trajectories further. Without relying on dense mode prediction or heuristic post-processing, ModeSeq considerably improves the diversity of multimodal output while attaining satisfactory trajectory accuracy, resulting in balanced performance on motion prediction benchmarks. Moreover, ModeSeq naturally acquires the capability of mode extrapolation, which supports forecasting more behavior modes when the future is highly uncertain.
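To make the contrast with winner-take-all concrete, here is one hedged reading of the EMTA target assignment, inferred from the name rather than taken from the paper: instead of supervising whichever mode is globally closest to the ground truth, scan modes in decoding order and supervise the earliest one that already matches within a tolerance, falling back to winner-take-all when nothing matches. The matching criterion (`match_dist` on endpoint distance) is an illustrative assumption.

```python
import numpy as np

def emta_target_index(decoded_trajs, gt_traj, match_dist=2.0):
    """Illustrative Early-Match-Take-All assignment (details assumed):
    return the index of the EARLIEST decoded mode whose endpoint lies
    within `match_dist` of the ground-truth endpoint; if no mode
    matches, fall back to the closest mode (winner-take-all)."""
    dists = [np.linalg.norm(traj[-1] - gt_traj[-1]) for traj in decoded_trajs]
    for i, d in enumerate(dists):
        if d < match_dist:
            return i                  # earliest sufficiently close mode wins
    return int(np.argmin(dists))      # WTA fallback when nothing matches
```

Under this reading, supervision gravitates toward early decoding steps whenever any of them is already adequate, leaving later steps free to cover alternative behaviors, which is one way the sequential inductive bias could diversify the trajectory set.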