Towards Intrinsic-Aware Monocular 3D Object Detection

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Monocular 3D object detection is highly sensitive to camera intrinsics, which limits its generalization across diverse imaging setups. This work proposes MonoIA, a novel framework that reformulates intrinsic parameter modeling from numerical conditioning to semantic representation. By leveraging large language models and vision-language models, MonoIA generates intrinsic embeddings that capture high-level semantic cues about camera configurations. These embeddings are then integrated into the detection network via an intrinsic-adaptive module, enabling perceptual-level understanding and robust detection across varying camera setups. Evaluated on standard benchmarks including KITTI, Waymo, and nuScenes, MonoIA achieves new state-of-the-art performance, improving by 1.18% on the KITTI test set, and by a further 4.46% on the KITTI validation set when trained under multi-dataset joint training.
📝 Abstract
Monocular 3D object detection (Mono3D) aims to infer object locations and dimensions in 3D space from a single RGB image. Despite recent progress, existing methods remain highly sensitive to camera intrinsics and struggle to generalize across diverse settings, since intrinsics govern how 3D scenes are projected onto the image plane. We propose MonoIA, a unified intrinsic-aware framework that models and adapts to intrinsic variation through a language-grounded representation. The key insight is that intrinsic variation is not a numeric difference but a perceptual transformation that alters apparent scale, perspective, and spatial geometry. To capture this effect, MonoIA employs large language models and vision-language models to generate intrinsic embeddings that encode the visual and geometric implications of camera parameters. These embeddings are hierarchically integrated into the detection network via an Intrinsic Adaptation Module, allowing the model to modulate its feature representations according to camera-specific configurations and maintain consistent 3D detection across intrinsics. This shifts intrinsic modeling from numeric conditioning to semantic representation, enabling robust and unified perception across cameras. Extensive experiments show that MonoIA achieves new state-of-the-art results on standard benchmarks including KITTI, Waymo, and nuScenes (e.g., +1.18% on the KITTI leaderboard), and further improves performance under multi-dataset training (e.g., +4.46% on KITTI Val).
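The abstract's core mechanism — intrinsic embeddings that modulate detector features via an Intrinsic Adaptation Module — can be illustrated with a minimal sketch. The paper does not specify the module's internals here, so the following assumes a FiLM-style conditioning scheme: a hypothetical `IntrinsicAdaptationModule` maps an embedding (e.g., from a VLM text encoder describing the camera configuration) to per-channel scale and shift parameters applied to the detector's feature maps. The class name, layer sizes, and conditioning form are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class IntrinsicAdaptationModule(nn.Module):
    """Hypothetical sketch of intrinsic-conditioned feature modulation.

    Assumes FiLM-style conditioning: the intrinsic embedding (e.g., produced
    by a language/vision-language encoder from a camera description) yields
    per-channel scale (gamma) and shift (beta) applied to detector features.
    """

    def __init__(self, embed_dim: int, feat_channels: int):
        super().__init__()
        # Project the intrinsic embedding to per-channel modulation params.
        self.to_scale = nn.Linear(embed_dim, feat_channels)
        self.to_shift = nn.Linear(embed_dim, feat_channels)

    def forward(self, feats: torch.Tensor, intrinsic_emb: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) detector feature map
        # intrinsic_emb: (B, D) camera-specific embedding
        gamma = self.to_scale(intrinsic_emb).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = self.to_shift(intrinsic_emb).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        # Residual-style modulation keeps features unchanged when gamma=beta=0.
        return feats * (1 + gamma) + beta
```

In a hierarchical integration, one such module could be attached at each feature-pyramid level, letting the detector re-weight scale- and perspective-sensitive channels per camera setup.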
Problem

Research questions and friction points this paper is trying to address.

Monocular 3D Object Detection
Camera Intrinsics
Generalization
Intrinsic Variation
3D Perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

intrinsic-aware
monocular 3D object detection
language-grounded representation
intrinsic embedding
vision-language model