V2XPnP: Vehicle-to-Everything Spatio-Temporal Fusion for Multi-Agent Perception and Prediction

πŸ“… 2024-12-02
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the limitations of single-vehicle perception in V2X scenarios, this paper proposes the first end-to-end multi-agent spatio-temporal fusion framework, moving beyond prevailing single-frame collaborative perception by jointly modeling inter-vehicle, inter-frame temporal, and high-definition map information. Methodologically, the authors design the V2XPnP intermediate-fusion architecture with one-step and multi-step communication strategies, and introduce the first real-world sequential V2X dataset supporting all collaboration modes, the V2XPnP Sequential Dataset. A unified Transformer backbone jointly models cross-agent, cross-frame, and map relationships. Experiments demonstrate state-of-the-art performance on both cooperative perception and motion prediction tasks. The code and dataset are publicly released to advance research on temporal V2X cooperation.

πŸ“ Abstract
Vehicle-to-everything (V2X) technologies offer a promising paradigm to mitigate the limitations of constrained observability in single-vehicle systems. Prior work primarily focuses on single-frame cooperative perception, which fuses agents' information across different spatial locations but ignores temporal cues and temporal tasks (e.g., temporal perception and prediction). In this paper, we focus on spatio-temporal fusion in V2X scenarios and design one-step and multi-step communication strategies (when to transmit), examine their integration with three fusion strategies - early, late, and intermediate (what to transmit), and provide comprehensive benchmarks with 11 fusion models (how to fuse). Furthermore, we propose V2XPnP, a novel intermediate fusion framework within one-step communication for end-to-end perception and prediction. Our framework employs a unified Transformer-based architecture to effectively model complex spatio-temporal relationships across multiple agents, frames, and high-definition maps. Moreover, we introduce the V2XPnP Sequential Dataset, which supports all V2X collaboration modes and addresses the limitations of existing real-world datasets that are restricted to single-frame or single-mode cooperation. Extensive experiments demonstrate that our framework outperforms state-of-the-art methods in both perception and prediction tasks. The codebase and dataset will be released to facilitate future V2X research.
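The core idea of the unified Transformer backbone is that cross-agent and cross-frame fusion can be expressed as one attention operation over all (agent, frame) feature tokens. The following is a minimal sketch of that idea, not the paper's implementation: the identity Q/K/V projections, the feature shapes, and the function name are all illustrative stand-ins for learned components.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatiotemporal_attention(tokens):
    """Toy single-layer attention over flattened (agent, frame) tokens.

    tokens: array of shape (num_agents, num_frames, d) -- one feature
    vector per agent per past frame (a hypothetical stand-in for the
    intermediate BEV features exchanged in intermediate fusion).
    """
    num_agents, num_frames, d = tokens.shape
    # Flatten agents x frames into one token sequence, so a single
    # attention pass mixes information across both axes at once.
    x = tokens.reshape(num_agents * num_frames, d)
    q, k, v = x, x, x  # identity projections stand in for learned Wq/Wk/Wv
    attn = softmax(q @ k.T / np.sqrt(d))  # every token attends to every
                                          # other agent-frame token
    fused = attn @ v
    return fused.reshape(num_agents, num_frames, d)

# two agents, three past frames, 4-dim features
rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 3, 4))
fused = spatiotemporal_attention(feats)
print(fused.shape)  # (2, 3, 4)
```

In the actual framework, the same attention mechanism would additionally attend over encoded high-definition map tokens and feed task heads for detection and trajectory prediction; this sketch only shows the joint cross-agent, cross-frame mixing step.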
Problem

Research questions and friction points this paper is trying to address.

Single-vehicle systems suffer from constrained observability; V2X cooperation can mitigate this.
Prior cooperative perception is largely single-frame, ignoring the temporal cues needed for temporal perception and prediction.
Existing real-world V2X datasets are restricted to single-frame or single-mode cooperation, limiting spatio-temporal fusion research.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatio-temporal fusion in V2X scenarios
Transformer-based architecture for multi-agent modeling
V2XPnP Sequential Dataset for diverse collaboration modes
πŸ”Ž Similar Papers
No similar papers found.