Out-of-Sight Trajectories: Tracking, Fusion, and Prediction

📅 2025-09-18
🤖 AI Summary
Trajectory prediction in real-world autonomous driving scenarios is hindered by sensor field-of-view limitations, occlusions, and trajectory noise, compromising the reliability of existing methods. To address this, we propose an unsupervised vision–localization joint denoising framework. First, we introduce a novel vision–localization projection mechanism that establishes cross-modal mappings to enable trajectory denoising for targets in unobserved regions. Second, we design an enhanced unsupervised denoising module that operates without ground-truth trajectory annotations or visual references. Third, our framework tightly integrates camera calibration, multi-sensor fusion, Kalman filtering, and state-of-the-art trajectory prediction models. Extensive experiments on the Vi-Fi and JRDB benchmarks demonstrate significant improvements over prior baselines, achieving new state-of-the-art performance in both trajectory denoising and forecasting. The source code and preprocessed datasets are publicly released.

📝 Abstract
Trajectory prediction is a critical task in computer vision and autonomous systems, playing a key role in autonomous driving, robotics, surveillance, and virtual reality. Existing methods often rely on complete and noise-free observational data, overlooking the challenges posed by out-of-sight objects and the noise inherent in sensor data due to limited camera coverage, obstructions, and the absence of ground truth for denoised trajectories. These limitations pose safety risks and hinder reliable prediction in real-world scenarios. In this extended work, we present advancements in Out-of-Sight Trajectory (OST), a novel task that predicts the noise-free visual trajectories of out-of-sight objects using noisy sensor data. Building on our previous research, we broaden the scope of Out-of-Sight Trajectory Prediction (OOSTraj) to include pedestrians and vehicles, extending its applicability to autonomous driving, robotics, surveillance, and virtual reality. Our enhanced Vision-Positioning Denoising Module leverages camera calibration to establish a vision-positioning mapping that compensates for the lack of visual references, while denoising noisy sensor data in an unsupervised manner. Through extensive evaluations on the Vi-Fi and JRDB datasets, our approach achieves state-of-the-art performance in both trajectory denoising and prediction, significantly surpassing previous baselines. Additionally, we introduce comparisons with traditional denoising methods, such as Kalman filtering, and adapt recent trajectory prediction models to our task, providing a comprehensive benchmark. This work represents the first initiative to integrate vision-positioning projection for denoising noisy sensor trajectories of out-of-sight agents, paving the way for future advances. The code and preprocessed datasets are available at github.com/Hai-chao-Zhang/OST.
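The abstract benchmarks against classical Kalman filtering as a denoising baseline. As a rough illustration of what that baseline does (not the paper's method), the sketch below applies a constant-velocity Kalman filter to a noisy 2D trajectory; the state layout, noise variances, and the synthetic straight-line trajectory are illustrative assumptions, not details from the paper.

```python
import numpy as np

def kalman_denoise(observations, dt=1.0, process_var=1e-2, obs_var=1.0):
    """Constant-velocity Kalman filter over noisy 2D positions.

    observations: (T, 2) array of noisy (x, y) positions.
    Returns a (T, 2) array of filtered positions.
    """
    # State: [x, y, vx, vy]; only the position is observed.
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                       # positions advance by velocity
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                      # observation picks out position
    Q = process_var * np.eye(4)                  # process noise covariance
    R = obs_var * np.eye(2)                      # observation noise covariance

    x = np.array([observations[0, 0], observations[0, 1], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in observations:
        # Predict step: propagate state and covariance.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the noisy measurement z.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.asarray(out)

# Straight-line ground truth corrupted with Gaussian noise.
rng = np.random.default_rng(0)
truth = np.stack([np.arange(50, dtype=float), 0.5 * np.arange(50)], axis=1)
noisy = truth + rng.normal(scale=1.0, size=truth.shape)
denoised = kalman_denoise(noisy)
err_noisy = np.mean(np.linalg.norm(noisy - truth, axis=1))
err_denoised = np.mean(np.linalg.norm(denoised - truth, axis=1))
```

Unlike the paper's unsupervised vision-positioning approach, this baseline assumes a fixed linear motion model, which is part of why learned denoisers can outperform it on irregular pedestrian and vehicle trajectories.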
Problem

Research questions and friction points this paper is trying to address.

Predicting noise-free trajectories for out-of-sight objects
Denoising sensor data without ground truth references
Extending applicability to pedestrians and autonomous systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Positioning Denoising Module for unsupervised sensor data
Camera calibration mapping addresses lack of visual references
Integrates vision-positioning projection for out-of-sight trajectories
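The vision-positioning mapping rests on standard camera calibration: a calibrated camera lets positioning-domain (world) coordinates be projected into the image plane, giving out-of-sight agents a visual-frame trajectory even without pixels. A minimal sketch of that pinhole projection, with illustrative intrinsics and an identity extrinsic pose (both assumed values, not the paper's calibration):

```python
import numpy as np

def project_to_image(points_world, K, R, t):
    """Project 3D world points into pixel coordinates with a calibrated camera.

    points_world: (N, 3) world coordinates.
    K: 3x3 intrinsic matrix; R: 3x3 rotation; t: (3,) translation (extrinsics).
    Returns (N, 2) pixel coordinates.
    """
    cam = points_world @ R.T + t      # world frame -> camera frame
    uv = cam @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]     # perspective divide by depth

# Illustrative calibration: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

pts = np.array([[0.0, 0.0, 5.0],   # on the optical axis, 5 m ahead
                [1.0, 0.0, 5.0]])  # 1 m to the right at the same depth
px = project_to_image(pts, K, R, t)
# The on-axis point lands at the principal point (320, 240).
```

In the paper's setting, this projection runs in the opposite direction of ordinary rendering: noisy positioning estimates are mapped into the camera frame so the denoiser can be trained without ground-truth visual trajectories.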
Haichao Zhang
Senior Research Scientist, Horizon Robotics
Embodied AI · Reinforcement Learning · Robot Learning
Yi Xu
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
Yun Fu
Department of Electrical and Computer Engineering and the Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA