MASAR: Motion-Appearance Synergy Refinement for Joint Detection and Trajectory Forecasting

📅 2026-02-13

📝 Abstract
Classical autonomous driving systems connect perception and prediction modules via hand-crafted bounding-box interfaces, limiting information flow and propagating errors to downstream tasks. Recent research aims to develop end-to-end models that jointly address perception and prediction; however, these models often fail to fully exploit the synergy between appearance and motion cues, relying mainly on short-term visual features. We follow the idea of "looking backward to look forward" and propose MASAR, a novel fully differentiable framework for joint 3D detection and trajectory forecasting that is compatible with any transformer-based 3D detector. MASAR employs an object-centric spatio-temporal mechanism that jointly encodes appearance and motion features. By predicting past trajectories and refining them using guidance from appearance cues, MASAR captures long-term temporal dependencies that enhance future trajectory forecasting. Experiments on the nuScenes dataset demonstrate MASAR's effectiveness, showing improvements of over 20% in minADE and minFDE while maintaining robust detection performance. Code and models are available at https://github.com/aminmed/MASAR.
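The minADE and minFDE metrics cited in the abstract are standard trajectory-forecasting measures: given K candidate future trajectories per object, minADE is the lowest average displacement error across timesteps and minFDE the lowest final-timestep error among the candidates. A minimal sketch of how they are typically computed (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def min_ade_fde(preds, gt):
    """Compute minADE and minFDE over K candidate trajectories.

    preds: array of shape (K, T, 2) -- K predicted future trajectories
    gt:    array of shape (T, 2)    -- ground-truth future trajectory
    """
    # Per-candidate, per-timestep Euclidean displacement error, shape (K, T)
    dists = np.linalg.norm(preds - gt[None], axis=-1)
    ade = dists.mean(axis=1)   # average displacement error per candidate
    fde = dists[:, -1]         # final displacement error per candidate
    # "min" variants take the best candidate for each metric
    return ade.min(), fde.min()
```

In multi-modal forecasting benchmarks such as nuScenes, these metrics are averaged over all evaluated objects, so a 20% improvement reflects candidate trajectories landing consistently closer to the ground truth.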
Problem

Research questions and friction points this paper is trying to address.

joint detection and forecasting
motion-appearance synergy
temporal dependencies
autonomous driving perception
trajectory prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Motion-Appearance Synergy
Joint Detection and Forecasting
Spatio-Temporal Refinement
End-to-End Autonomous Driving
Trajectory Forecasting