🤖 AI Summary
This work addresses the challenges of 3D perception in infrastructure-based “outside-in” multi-camera systems, where heterogeneous camera layouts and extreme occlusions severely degrade performance. To this end, the authors propose a unified 3D perception framework built upon the Sparse4D architecture, integrating geometric priors in world coordinates with occlusion-aware ReID embeddings. They further enhance appearance invariance through NVIDIA COSMOS-based generative Sim2Real augmentation, eliminating the need for manual annotations. An efficient TensorRT plugin for Multi-Scale Deformable Aggregation (MSDA) is developed to enable high-speed inference. The proposed method achieves state-of-the-art performance on the AI City Challenge 2025 benchmark with an HOTA score of 45.22, delivering a 2.15× speedup in inference throughput and supporting concurrent processing of over 64 camera streams on a single Blackwell GPU.
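As a rough illustration of the world-coordinate geometric priors mentioned above, the sketch below projects 3D anchor centers from a shared world frame into one static camera with known calibration, so that a common set of queries can sample features from every view. All function names, shapes, and conventions here are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (assumed, not from the paper): project Nx3 world-coordinate
# anchor centers into a single static camera using its known calibration.
import numpy as np

def project_anchors(anchors_world, extrinsic, intrinsic):
    """anchors_world: (N, 3) points in the shared world frame (meters).
    extrinsic:     (4, 4) world-to-camera rigid transform.
    intrinsic:     (3, 3) pinhole camera matrix.
    Returns (N, 2) pixel coordinates and an (N,) in-front-of-camera mask.
    """
    n = anchors_world.shape[0]
    homo = np.concatenate([anchors_world, np.ones((n, 1))], axis=1)  # (N, 4) homogeneous
    cam = (extrinsic @ homo.T).T[:, :3]                              # (N, 3) camera frame
    valid = cam[:, 2] > 1e-3                                         # keep points in front of the camera
    pix = (intrinsic @ cam.T).T                                      # (N, 3)
    pix = pix[:, :2] / np.clip(pix[:, 2:3], 1e-3, None)              # perspective divide
    return pix, valid
```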
📝 Abstract
Accurate 3D object perception and multi-target multi-camera (MTMC) tracking are fundamental for the digital transformation of industrial infrastructure. However, transitioning "inside-out" autonomous driving models to "outside-in" static camera networks presents significant challenges due to heterogeneous camera placements and extreme occlusion. In this paper, we present an adapted Sparse4D framework specifically optimized for large-scale infrastructure environments. Our system leverages absolute world-coordinate geometric priors and introduces an occlusion-aware ReID embedding module to maintain identity stability across distributed sensor networks. To bridge the Sim2Real domain gap without manual labeling, we employ a generative data augmentation strategy using the NVIDIA COSMOS framework, creating diverse environmental styles that enhance the model's appearance invariance. Evaluated on the AI City Challenge 2025 benchmark, our camera-only framework achieves a state-of-the-art HOTA of $45.22$. Furthermore, we address real-time deployment constraints by developing an optimized TensorRT plugin for Multi-Scale Deformable Aggregation (MSDA). Our hardware-accelerated implementation achieves a $2.15\times$ speedup on modern GPU architectures, enabling a single Blackwell-class GPU to support over 64 concurrent camera streams.
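For readers unfamiliar with the MSDA step, the PyTorch-style sketch below shows the core idea of multi-scale deformable aggregation: each query samples the feature pyramid at its projected keypoint locations and fuses the samples with learned weights. This is an assumed reference implementation for illustration only, not the optimized TensorRT plugin described in the paper; all shapes and names are hypothetical.

```python
# Minimal sketch (assumption) of multi-scale deformable aggregation.
import torch
import torch.nn.functional as F

def deformable_aggregate(feature_maps, sample_points, weights):
    """feature_maps:  list of S tensors, each (B, C, H_s, W_s), one per scale.
    sample_points: (B, Q, K, 2) sampling locations normalized to [-1, 1].
    weights:       (B, Q, K, S) learned weights over keypoints and scales.
    Returns (B, Q, C) aggregated per-query features.
    """
    B, Q, K, _ = sample_points.shape
    grid = sample_points.view(B, Q * K, 1, 2)                          # grid_sample expects (B, H_out, W_out, 2)
    out = 0.0
    for s, feat in enumerate(feature_maps):
        sampled = F.grid_sample(feat, grid, align_corners=False)       # (B, C, Q*K, 1) bilinear samples
        sampled = sampled.squeeze(-1).view(B, -1, Q, K)                # (B, C, Q, K)
        w = weights[..., s].unsqueeze(1)                               # (B, 1, Q, K) weights for this scale
        out = out + (sampled * w).sum(dim=-1)                          # accumulate to (B, C, Q)
    return out.permute(0, 2, 1)                                        # (B, Q, C)
```

In a deployment setting, a fused kernel for exactly this sample-and-weight loop is the kind of operation a custom TensorRT plugin would accelerate, since the naive per-scale `grid_sample` calls dominate latency.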