Object-Scene-Camera Decomposition and Recomposition for Data-Efficient Monocular 3D Object Detection

📅 2026-02-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Monocular 3D object detection suffers from the strong coupling among objects, scenes, and camera poses in training data, which limits data diversity, leads to model overfitting, and incurs high annotation costs. This work proposes an online decoupling and recombination strategy that explicitly disentangles these three factors for the first time and dynamically synthesizes novel training samples. By leveraging textured 3D point clouds to efficiently decompose input images, the method inserts objects into free space and perturbs camera poses to render photorealistic augmented views, thereby achieving comprehensive combinatorial coverage. Evaluated on KITTI and Waymo, the approach consistently improves performance across five state-of-the-art models under both fully supervised and sparsely annotated settings, significantly reducing reliance on dense annotations.

πŸ“ Abstract
Monocular 3D object detection (M3OD) is intrinsically ill-posed, so training a high-performance deep-learning-based M3OD model requires a vast amount of labeled data with rich visual variation across diverse scenes, objects, and camera poses. However, we observe that, due to strong human bias, the three independent entities, i.e., object, scene, and camera pose, are always tightly entangled when an image is captured to construct training data. More specifically, specific 3D objects are always captured in particular scenes with fixed camera poses, and hence lack the necessary diversity. Such tight entanglement leads to insufficient utilization of, and overfitting to, uniform training data. To mitigate this, we propose an online object-scene-camera decomposition and recomposition data manipulation scheme to exploit the training data more efficiently. We first fully decompose training images into textured 3D object point models and background scenes in a computation- and storage-efficient manner. We then continuously recompose new training images in each epoch by inserting the 3D objects into the free space of the background scenes and rendering them with perturbed camera poses from the textured 3D point representation. In this way, the refreshed training data across all epochs can cover the full spectrum of independent object, scene, and camera pose combinations. This scheme serves as a plug-and-play component to boost M3OD models, working flexibly in both fully and sparsely supervised settings. In the sparsely supervised setting, only the objects closest to the ego-camera are annotated; we can then flexibly increase the number of annotated objects to control annotation cost. For validation, our method is applied to five representative M3OD models and evaluated on both the KITTI and the more challenging Waymo datasets.
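The per-epoch recomposition step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Obj3D`, `Scene`, and `recompose` names, the flat free-space anchor list, and the simple translation-plus-noise camera perturbation are all assumptions for exposition; the actual method renders photorealistic images from textured point clouds.

```python
import random
from dataclasses import dataclass

@dataclass
class Obj3D:
    """A decomposed object: textured 3D points plus its 3D box label.
    (Hypothetical container; the paper stores textured point models.)"""
    points: list   # [(x, y, z, r, g, b), ...] textured point cloud
    box: tuple     # (x, y, z, w, h, l, yaw) 3D bounding box label

@dataclass
class Scene:
    """A decomposed background scene with candidate insertion anchors."""
    free_space: list  # [(x, y, z), ...] drivable free-space positions

def recompose(scene, object_bank, rng, n_insert=2, pose_noise=0.05):
    """One online recomposition step (sketch): insert objects from the
    bank into the scene's free space, relocating each box label to its
    sampled anchor, and draw a small random camera-pose perturbation
    to be applied before rendering the augmented view."""
    anchors = rng.sample(scene.free_space,
                         k=min(n_insert, len(scene.free_space)))
    placed = []
    for x, y, z in anchors:
        obj = rng.choice(object_bank)
        _, _, _, w, h, l, yaw = obj.box
        # translate the object's 3D box to the sampled free-space anchor
        placed.append((obj, (x, y, z, w, h, l, yaw)))
    # small perturbation of the camera extrinsics (here: 3-DoF translation)
    cam_jitter = tuple(rng.uniform(-pose_noise, pose_noise) for _ in range(3))
    return placed, cam_jitter
```

Because the sampling is redone every epoch, each scene is paired with fresh object and camera-pose combinations over training, which is the source of the combinatorial coverage the paper claims.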
Problem

Research questions and friction points this paper is trying to address.

monocular 3D object detection
data diversity
object-scene-camera entanglement
overfitting
training data efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

object-scene-camera decomposition
data-efficient 3D detection
monocular 3D object detection
synthetic data recomposition
plug-and-play augmentation