TurboTrain: Towards Efficient and Balanced Multi-Task Learning for Multi-Agent Perception and Prediction

📅 2025-08-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing multi-agent perception and prediction frameworks suffer from complex multi-stage training pipelines, labor-intensive hyperparameter tuning, and performance imbalance caused by inter-task gradient conflicts. To address these challenges, this paper proposes TurboTrain, a unified, end-to-end, efficient training framework. Its core innovations are: (1) multi-agent spatiotemporal masked-reconstruction pretraining that explicitly models cross-agent spatiotemporal dependencies; and (2) a gradient-conflict-aware multi-task balancing mechanism that automatically aligns the optimization directions of detection and trajectory prediction. TurboTrain eliminates manual intervention, enabling stable, rapid convergence in joint training. Evaluated on the real-world cooperative driving dataset V2XPnP-Seq, TurboTrain outperforms state-of-the-art methods, improving both perception accuracy and trajectory prediction consistency while reducing total training time by 32%.
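The summary does not spell out how the balancing mechanism aligns optimization directions. A minimal sketch of one well-known gradient-conflict-suppression scheme, PCGrad-style projection (Yu et al., 2020), illustrates the general idea: when the detection and prediction gradients point in conflicting directions (negative dot product), the conflicting component is projected away before the shared backbone is updated. This is a stand-in for intuition, not TurboTrain's actual algorithm; all names and shapes below are illustrative.

```python
import numpy as np

def suppress_gradient_conflicts(grads):
    """Project away the conflicting component between each pair of
    per-task gradients, then average the results for the shared update.
    PCGrad-style sketch; TurboTrain's exact mechanism may differ."""
    projected = [g.astype(float).copy() for g in grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(grads):
            if i == j:
                continue
            dot = float(np.dot(g_i, g_j))
            if dot < 0:  # conflicting directions: remove the component along g_j
                g_i -= dot / (np.dot(g_j, g_j) + 1e-12) * g_j
    return np.mean(projected, axis=0)

# toy example: a detection gradient and a prediction gradient that conflict
g_det = np.array([1.0, 0.0])
g_pred = np.array([-1.0, 1.0])
g_shared = suppress_gradient_conflicts([g_det, g_pred])
```

After projection, the combined update no longer opposes either task's gradient, which is the property that prevents one task from silently degrading the other during joint training.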

πŸ“ Abstract
End-to-end training of multi-agent systems offers significant advantages in improving multi-task performance. However, training such models remains challenging and requires extensive manual design and monitoring. In this work, we introduce TurboTrain, a novel and efficient training framework for multi-agent perception and prediction. TurboTrain comprises two key components: a multi-agent spatiotemporal pretraining scheme based on masked reconstruction learning and a balanced multi-task learning strategy based on gradient conflict suppression. By streamlining the training process, our framework eliminates the need for manually designing and tuning complex multi-stage training pipelines, substantially reducing training time and improving performance. We evaluate TurboTrain on a real-world cooperative driving dataset, V2XPnP-Seq, and demonstrate that it further improves the performance of state-of-the-art multi-agent perception and prediction models. Our results highlight that pretraining effectively captures spatiotemporal multi-agent features and significantly benefits downstream tasks. Moreover, the proposed balanced multi-task learning strategy enhances detection and prediction.
Problem

Research questions and friction points this paper is trying to address.

Efficient training for multi-agent perception and prediction
Balancing multi-task learning to reduce gradient conflicts
Eliminating manual design of complex multi-stage pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent spatiotemporal pretraining with masked reconstruction
Balanced multi-task learning via gradient conflict suppression
End-to-end framework eliminating manual pipeline design
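The first innovation, masked-reconstruction pretraining, follows the general masked-autoencoder recipe: hide a random subset of per-agent, per-timestep feature tokens, encode only the visible ones, and compute a reconstruction loss on the hidden ones, forcing the model to learn cross-agent spatiotemporal dependencies. A toy masking step is sketched below; the shapes, mask ratio, and function name are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_spatiotemporal_tokens(features, mask_ratio=0.5):
    """Split agent-time feature tokens into a visible set (encoder input)
    and a masked set (reconstruction targets). Illustrative sketch only;
    features has shape (num_agents, num_timesteps, feature_dim)."""
    num_agents, num_steps, dim = features.shape
    tokens = features.reshape(num_agents * num_steps, dim)
    num_masked = int(mask_ratio * len(tokens))
    mask = np.zeros(len(tokens), dtype=bool)
    mask[rng.choice(len(tokens), size=num_masked, replace=False)] = True
    visible = tokens[~mask]   # fed to the encoder
    targets = tokens[mask]    # reconstructed from context, then scored by the loss
    return visible, targets, mask

# toy batch: 4 agents, 10 timesteps, 32-d features per token
feats = rng.standard_normal((4, 10, 32))
visible, targets, mask = mask_spatiotemporal_tokens(feats)
```

During pretraining, a decoder would predict `targets` from the encoded `visible` tokens plus positional information, and the resulting encoder weights would initialize the downstream detection and prediction heads.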