MVTrajecter: Multi-View Pedestrian Tracking with Trajectory Motion Cost and Trajectory Appearance Cost

📅 2025-09-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multi-view pedestrian tracking (MVPT) requires robust cross-view association under a bird's-eye view, yet existing end-to-end methods exploit only the current frame and a single historical frame, neglecting longer-term trajectory information. To address this, we propose MVTrajecter, an end-to-end deep framework that, for the first time, unifies multi-timestamp trajectory motion modeling and appearance matching within an attention-driven temporal association mechanism. It jointly optimizes trajectory-level motion and appearance costs while explicitly modeling cross-temporal and cross-view dependencies via attention. MVTrajecter achieves significant improvements over state-of-the-art methods on multiple mainstream benchmarks, and ablation studies validate the contributions of multi-timestamp trajectory modeling and attention-based temporal fusion. Our work establishes a more robust and learnable association paradigm for MVPT.

📝 Abstract
Multi-View Pedestrian Tracking (MVPT) aims to track pedestrians in the form of a bird's eye view occupancy map from multi-view videos. End-to-end methods that detect and associate pedestrians within one model have shown great progress in MVPT. The motion and appearance information of pedestrians is important for the association, but previous end-to-end MVPT methods rely only on the current and its single adjacent past timestamp, discarding the past trajectories before that. This paper proposes a novel end-to-end MVPT method called Multi-View Trajectory Tracker (MVTrajecter) that utilizes information from multiple timestamps in past trajectories for robust association. MVTrajecter introduces trajectory motion cost and trajectory appearance cost to effectively incorporate motion and appearance information, respectively. These costs calculate which pedestrians at the current and each past timestamp are likely identical based on the information between those timestamps. Even if a current pedestrian could be associated with a false pedestrian at some past timestamp, these costs enable the model to associate that current pedestrian with the correct past trajectory based on other past timestamps. In addition, MVTrajecter effectively captures the relationships between multiple timestamps leveraging the attention mechanism. Extensive experiments demonstrate the effectiveness of each component in MVTrajecter and show that it outperforms the previous state-of-the-art methods.
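The abstract describes association via per-timestamp motion and appearance costs summed over a trajectory's past, so that a misleading single timestamp can be outvoted by the others. The paper's actual costs are learned inside the network; the sketch below is only a hand-crafted illustration of the idea, assuming Euclidean distance as the motion cost, cosine distance on L2-normalized embeddings as the appearance cost, and a brute-force minimum-cost assignment (all names and shapes are hypothetical, not from the paper).

```python
import numpy as np
from itertools import permutations


def min_cost_assignment(cost):
    """Exact minimum-cost one-to-one assignment by brute force.

    Fine for the handful of pedestrians in this illustration; a real
    tracker would use the Hungarian algorithm instead.
    """
    n, m = cost.shape
    best, best_perm = float("inf"), None
    for perm in permutations(range(m), n):
        c = sum(cost[i, j] for i, j in enumerate(perm))
        if c < best:
            best, best_perm = c, perm
    return [(i, j) for i, j in enumerate(best_perm)]


def trajectory_association(curr_pos, curr_feat, traj_pos, traj_feat,
                           w_motion=1.0, w_app=1.0):
    """Associate current detections with past trajectories by averaging
    motion and appearance costs over multiple past timestamps.

    curr_pos:  (N, 2)    current ground-plane positions
    curr_feat: (N, D)    current appearance embeddings (L2-normalized)
    traj_pos:  (M, T, 2) positions of M trajectories over T past timestamps
    traj_feat: (M, T, D) appearance embeddings along each trajectory
    """
    # Motion cost: mean Euclidean distance between each current position
    # and each trajectory's position at every past timestamp.
    diff = curr_pos[:, None, None, :] - traj_pos[None, :, :, :]   # (N, M, T, 2)
    motion_cost = np.linalg.norm(diff, axis=-1).mean(axis=-1)     # (N, M)

    # Appearance cost: mean cosine distance over past timestamps.
    sim = np.einsum("nd,mtd->nmt", curr_feat, traj_feat)          # (N, M, T)
    app_cost = (1.0 - sim).mean(axis=-1)                          # (N, M)

    cost = w_motion * motion_cost + w_app * app_cost
    return min_cost_assignment(cost)
```

Because each cost is averaged over all T past timestamps, one noisy timestamp shifts the total only by 1/T of its error, which is the robustness property the abstract argues for.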
Problem

Research questions and friction points this paper is trying to address.

MVPT tracks pedestrians from multi-view videos using bird's eye view
Previous methods ignore historical trajectory data beyond immediate timestamps
Motion and appearance cues across full past trajectories are left unexploited for association
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes multiple past timestamps for robust association
Introduces trajectory motion and appearance cost metrics
Leverages an attention mechanism to capture relationships between timestamps
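The attention-based temporal fusion mentioned above can be pictured as a current-frame query attending over per-timestamp trajectory embeddings. This is a generic scaled dot-product attention sketch, not the paper's architecture; the function name and shapes are assumptions for illustration.

```python
import numpy as np


def timestamp_attention(query, keys, values):
    """Scaled dot-product attention over trajectory timestamps.

    query:  (D,)   embedding of a current detection
    keys:   (T, D) per-timestamp trajectory embeddings
    values: (T, D) per-timestamp trajectory embeddings to fuse
    Returns a (D,) context vector weighting informative timestamps higher.
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)        # (T,) similarity per timestamp
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over timestamps
    return weights @ values                   # weighted fusion of the past
```

With a zero query the weights become uniform and the output reduces to the plain average of the past embeddings; a trained query instead emphasizes the timestamps most relevant to the current detection.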