🤖 AI Summary
To address the degraded performance of LiDAR/camera tracking in adverse weather, the low vertical accuracy and weak motion modeling of conventional radar tracking, and the inability of Kalman filtering to adapt to abrupt maneuvers, this paper proposes an end-to-end trajectory prediction and tracking framework based on 4D radar. The method integrates Bayesian approximate inference across the entire pipeline (4D radar detection, data association, and state estimation) for the first time. It pairs a two-stage Doppler-enhanced data association mechanism with a Transformer-based architecture that models nonlinear vehicle dynamics, overcoming the limitations of linear/Gaussian assumptions. Evaluated on the K-Radar dataset, the approach achieves a 5.7% improvement in AMOTA and significantly enhances tracking robustness and vertical accuracy under rain, snow, and fog.
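The two-stage Doppler-enhanced association described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the gate thresholds, the greedy second stage, and the track/detection tuples are all illustrative assumptions. The idea it shows is that a Doppler (radial-velocity) gate first rules out pairs whose velocities disagree, so that closely spaced targets are separated before any distance-based matching runs.

```python
def associate(tracks, detections, doppler_gate=2.0, dist_gate=4.0):
    """Two-stage association sketch (illustrative, not the paper's code).

    Stage 1: gate track/detection pairs by Doppler (radial velocity)
    difference, separating closely spaced targets with different motion.
    Stage 2: greedy nearest-neighbour assignment on 3D distance among the
    surviving pairs. tracks/detections are (position_xyz, doppler) tuples.
    """
    candidates = []
    for ti, (tp, tv) in enumerate(tracks):
        for di, (dp, dv) in enumerate(detections):
            if abs(tv - dv) > doppler_gate:      # stage 1: Doppler gate
                continue
            dist = sum((a - b) ** 2 for a, b in zip(tp, dp)) ** 0.5
            if dist <= dist_gate:                # spatial gate
                candidates.append((dist, ti, di))
    candidates.sort()                            # stage 2: greedy by distance
    matched_t, matched_d, matches = set(), set(), []
    for dist, ti, di in candidates:
        if ti in matched_t or di in matched_d:
            continue
        matches.append((ti, di))
        matched_t.add(ti)
        matched_d.add(di)
    return matches
```

With two targets almost at the same position but moving in opposite directions, position-only matching is ambiguous, while the Doppler gate resolves the assignment; a real system would typically replace the greedy stage with a Hungarian (optimal) assignment.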
📝 Abstract
Accurate 3D multi-object tracking (MOT) is vital for autonomous vehicles, yet LiDAR- and camera-based methods degrade in adverse weather. Radar-based solutions remain robust but often suffer from limited vertical resolution and simplistic motion models. Existing Kalman filter-based approaches also rely on a fixed noise covariance, hampering adaptability when objects make sudden maneuvers. We propose Bayes-4DRTrack, a 4D radar-based MOT framework that adopts a Transformer-based motion prediction network to capture nonlinear motion dynamics and employs Bayesian approximation in both the detection and prediction steps. Moreover, our two-stage data association leverages Doppler measurements to better distinguish closely spaced targets. Evaluated on the K-Radar dataset (including adverse weather scenarios), Bayes-4DRTrack demonstrates a 5.7% gain in Average Multi-Object Tracking Accuracy (AMOTA) over methods with traditional motion models and fixed noise covariance. These results showcase enhanced robustness and accuracy in demanding, real-world conditions.
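The contrast between a fixed noise covariance and Bayesian approximation can be sketched as follows. This is a minimal illustration under assumed names: `noisy_predict` stands in for a dropout-enabled motion predictor (the paper uses a Transformer network; the constant-velocity step with Gaussian perturbation here is purely illustrative). The point is that running a stochastic predictor many times yields a sample mean and variance, so the uncertainty fed to the tracker adapts to the data instead of being a hand-tuned constant.

```python
import random
import statistics

def noisy_predict(x, v, dt, rng):
    """Stand-in for one stochastic forward pass of a motion predictor.

    The Gaussian perturbation plays the role of dropout randomness in a
    learned network (illustrative assumption, not the paper's model).
    """
    return x + v * dt + rng.gauss(0.0, 0.1)

def mc_predict(x, v, dt, n_samples=200, seed=0):
    """Monte Carlo approximation of the predictive distribution.

    Returns (mean, variance); the variance serves as an adaptive noise
    estimate, replacing a Kalman filter's fixed covariance.
    """
    rng = random.Random(seed)
    samples = [noisy_predict(x, v, dt, rng) for _ in range(n_samples)]
    return statistics.fmean(samples), statistics.pvariance(samples)
```

During an abrupt maneuver, disagreement among the stochastic passes grows, inflating the predicted variance and letting the filter weight new measurements more heavily.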