🤖 AI Summary
Existing monocular 3D detectors adopt a decoupled prediction paradigm that neglects the intrinsic geometric constraints among attributes, such as depth, 3D dimensions, and orientation, leading to geometric inconsistency and limiting detection accuracy. To address this, we propose Spatial-Projection Alignment (SPAN), the first framework to jointly model spatial point alignment (enforcing physical consistency between the 3D center and dimensions) and 3D-2D projection alignment (ensuring reprojection fidelity). We further introduce a hierarchical task learning strategy to mitigate the spatial drift and projection misalignment induced by decoupling. SPAN requires no backbone modification and is plug-and-play compatible with mainstream detectors. Extensive experiments on the KITTI and nuScenes benchmarks demonstrate significant improvements in 3D detection accuracy, validating both its effectiveness and its generalizability across diverse architectures and datasets.
📝 Abstract
Existing monocular 3D detectors typically tame the pronounced nonlinearity of 3D bounding box regression through a decoupled prediction paradigm, which employs multiple branches to estimate the geometric center, depth, dimensions, and rotation angle separately. Although this decoupling strategy simplifies learning, it inherently ignores the collaborative geometric constraints between attributes, depriving the model of a geometric consistency prior and thereby leading to suboptimal performance. To address this issue, we propose a novel Spatial-Projection Alignment (SPAN) method with two pivotal components: (i) Spatial Point Alignment enforces an explicit global spatial constraint between the predicted and ground-truth 3D bounding boxes, rectifying the spatial drift caused by decoupled attribute regression; (ii) 3D-2D Projection Alignment ensures that the projected 3D box fits tightly within its corresponding 2D detection box on the image plane, mitigating the projection misalignment overlooked in previous works. To ensure training stability, we further introduce a Hierarchical Task Learning strategy that progressively incorporates spatial-projection alignment as the 3D attribute predictions refine, preventing early-stage error propagation across attributes. Extensive experiments demonstrate that the proposed method can be easily integrated into any established monocular 3D detector and delivers significant performance improvements.
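The 3D-2D projection alignment idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the box parameterization, the camera intrinsics `K`, and the simple L1 discrepancy between the projected 3D box and the 2D detection box are all assumptions made for the sketch.

```python
import numpy as np

def box3d_corners(center, dims, yaw):
    """8 corners of a 3D box in camera coords (x right, y down, z forward).
    Assumed parameterization: geometric center (3,), dims = (h, w, l),
    yaw = rotation about the vertical (y) axis."""
    h, w, l = dims
    # Corner offsets relative to the geometric center.
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2.0
    y = np.array([ h,  h,  h,  h, -h, -h, -h, -h]) / 2.0
    z = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2.0
    R = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
                  [ 0.0,         1.0, 0.0        ],
                  [-np.sin(yaw), 0.0, np.cos(yaw)]])
    return (R @ np.vstack([x, y, z])).T + center  # (8, 3)

def project_to_2d_box(corners, K):
    """Project 3D corners with intrinsics K, then take the tight 2D bound
    (xmin, ymin, xmax, ymax) enclosing the projected corners."""
    uvw = (K @ corners.T).T            # (8, 3) homogeneous image points
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    return np.array([uv[:, 0].min(), uv[:, 1].min(),
                     uv[:, 0].max(), uv[:, 1].max()])

def projection_alignment_loss(proj_box, det_box2d):
    """Illustrative L1 discrepancy between the projected 3D box and the
    2D detection box; zero when the projection fits the 2D box exactly."""
    return np.abs(np.asarray(proj_box) - np.asarray(det_box2d)).mean()
```

Intuitively, if depth, dimensions, or orientation drift during decoupled regression, the projected 3D box no longer fits the 2D detection box, and a discrepancy term of this kind penalizes the mismatch.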