CoIn3D: Revisiting Configuration-Invariant Multi-Camera 3D Object Detection

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of multi-camera 3D object detection models to unseen camera configurations, such as variations in intrinsic and extrinsic parameters and spatial layouts. To tackle this challenge, the authors propose CoIn3D, a novel framework that systematically identifies and integrates four types of spatial priors—focal length, ground depth, ground gradient, and Plücker coordinates—into the detection pipeline. CoIn3D explicitly models configuration discrepancies through Spatial-aware Feature Modulation (SFM) and Camera-aware Data Augmentation (CDA), the latter built on a training-free dynamic image synthesis strategy for novel viewpoints. Extensive experiments on the NuScenes, Waymo, and Lyft benchmarks demonstrate that CoIn3D consistently and significantly improves cross-configuration detection performance across mainstream paradigms, including BEVDepth, BEVFormer, and PETR.
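Among the four spatial priors the summary lists, the ground-depth prior has a simple closed form under a flat-ground assumption. The sketch below is illustrative only (the paper's actual formulation is not shown here): for a level pinhole camera at height `h` above a flat ground plane, a pixel at image row `v` below the principal point `cy` back-projects to the ground at depth `Z = fy * h / (v - cy)`. The function name and parameters are hypothetical.

```python
def ground_depth(v: float, fy: float, cy: float, cam_height: float) -> float:
    """Flat-ground depth prior for a level pinhole camera (illustrative sketch).

    A pixel at row v (below the horizon row cy) back-projects to the ground
    plane at depth Z = fy * h / (v - cy), where h is the camera height.
    """
    if v <= cy:
        # At or above the horizon the ray never intersects the ground plane.
        return float("inf")
    return fy * cam_height / (v - cy)
```

For example, with `fy = 1000`, `cy = 500`, a camera 1.5 m above the ground, and a pixel at row 650, the prior depth is `1000 * 1.5 / 150 = 10.0` m. Such a per-pixel depth map depends only on intrinsics and mounting height, which is what makes it a configuration-aware prior rather than a learned quantity.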

📝 Abstract
Multi-camera 3D object detection (MC3D) has attracted increasing attention with the growing deployment of multi-sensor physical agents, such as robots and autonomous vehicles. However, MC3D models still struggle to generalize to unseen platforms with new multi-camera configurations. Current solutions simply employ a meta-camera for a unified representation but do not address the problem comprehensively. In this paper, we revisit this issue and identify that the devil lies in spatial prior discrepancies across source and target configurations, including different intrinsics, extrinsics, and array layouts. To address this, we propose CoIn3D, a generalizable MC3D framework that enables strong transferability from source configurations to unseen target ones. CoIn3D explicitly incorporates all identified spatial priors into both feature embedding and image observation through spatial-aware feature modulation (SFM) and camera-aware data augmentation (CDA), respectively. SFM enriches the feature space by integrating four spatial representations: focal length, ground depth, ground gradient, and Plücker coordinates. CDA improves observation diversity under various configurations via a training-free dynamic novel-view image synthesis scheme. Extensive experiments demonstrate that CoIn3D achieves strong cross-configuration performance on landmark datasets such as NuScenes, Waymo, and Lyft, under three dominant MC3D paradigms represented by BEVDepth, BEVFormer, and PETR.
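The Plücker-coordinate prior mentioned in the abstract is a standard way to encode per-pixel viewing rays independently of image resolution: each ray is represented by its unit direction `d` and moment `m = c × d`, where `c` is the camera center. A minimal numpy sketch, assuming world-to-camera extrinsics `x_cam = R x_world + t` (the paper's exact embedding may differ):

```python
import numpy as np

def plucker_rays(K: np.ndarray, R: np.ndarray, t: np.ndarray,
                 H: int, W: int) -> np.ndarray:
    """Per-pixel Plücker ray embedding (illustrative sketch).

    K: 3x3 intrinsics; R, t: world-to-camera extrinsics (x_cam = R @ x_world + t).
    Returns an (H, W, 6) map of (direction, moment) per pixel.
    """
    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3).T

    # Back-project to world-frame ray directions and normalize.
    dirs = R.T @ (np.linalg.inv(K) @ pix)
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)

    # Camera center in world frame, and the Plücker moment m = c × d.
    center = -R.T @ t.reshape(3, 1)
    moments = np.cross(center.T, dirs.T)

    return np.concatenate([dirs.T, moments], axis=1).reshape(H, W, 6)
```

Because the (direction, moment) pair changes consistently with both intrinsics and extrinsics, it gives the network a per-pixel signal about the camera configuration itself, which is the property that makes it useful as a configuration-invariance prior.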
Problem

Research questions and friction points this paper is trying to address.

multi-camera 3D object detection
configuration invariance
spatial prior discrepancy
cross-configuration generalization
camera configuration
Innovation

Methods, ideas, or system contributions that make the work stand out.

configuration-invariant
spatial-aware feature modulation
camera-aware data augmentation
multi-camera 3D object detection
cross-configuration generalization