🤖 AI Summary
Monocular 3D object detection suffers from depth ambiguity and a limited field of view, leading to insufficient geometric cues and degraded accuracy under occlusion or truncation. To address this, we propose a local-clustering-driven generalized scene memory framework. First, object-aware K-means clustering is applied to visual features to perform part-level, instance-aware grouping across images, constructing a structured, cross-image scene memory. Second, the clustered features are embedded into a query-based attention mechanism to jointly model local appearance and global scene priors. This design significantly improves feature consistency and geometric robustness for partially visible objects. Evaluated on the KITTI benchmark, our method achieves state-of-the-art performance, with particularly notable gains in detection accuracy and stability for severely occluded and far-range truncated instances.
📝 Abstract
Monocular 3D object detection offers a cost-effective solution for autonomous driving but suffers from ill-posed depth estimation and a limited field of view. These constraints cause a lack of geometric cues and reduced accuracy in occluded or truncated scenes. While recent approaches incorporate additional depth information to address geometric ambiguity, they overlook the visual cues crucial for robust recognition. We propose MonoCLUE, which enhances monocular 3D detection by leveraging both local clustering and a generalized scene memory of visual features. First, we perform K-means clustering on visual features to capture distinct object-level appearance parts (e.g., bonnet, car roof), improving detection of partially visible objects. The clustered features are propagated across regions to capture objects with similar appearances. Second, we construct a generalized scene memory by aggregating clustered features across images, providing consistent representations that generalize across scenes. This improves object-level feature consistency, enabling stable detection across varying environments. Lastly, we integrate both local cluster features and the generalized scene memory into object queries, guiding attention toward informative regions. Exploiting a unified local clustering and generalized scene memory strategy, MonoCLUE enables robust monocular 3D detection under occlusion and limited visibility, achieving state-of-the-art performance on the KITTI benchmark.
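The three-stage pipeline described in the abstract can be sketched in simplified form. The snippet below is a minimal NumPy illustration, not the paper's implementation: it clusters per-pixel features with plain K-means, aggregates centroids into a running-mean memory bank as a stand-in for the generalized scene memory (the real method presumably aligns clusters across images rather than relying on centroid order, and operates on learned backbone features), and lets object queries attend over the concatenation of local clusters and memory. All function and class names here are hypothetical.

```python
import numpy as np

def kmeans(feats, k, iters=10, seed=0):
    """Plain K-means over visual features (N, D) -> (k, D) centroids,
    standing in for the paper's part-level feature clustering."""
    rng = np.random.default_rng(seed)
    centroids = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # assign each feature vector to its nearest centroid
        dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = feats[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids

class SceneMemory:
    """Toy cross-image memory: exponential moving average of cluster
    centroids, giving representations that persist across scenes."""
    def __init__(self, k, dim, momentum=0.9):
        self.memory = np.zeros((k, dim))
        self.momentum = momentum
        self.initialized = False

    def update(self, centroids):
        if not self.initialized:
            self.memory = centroids.copy()
            self.initialized = True
        else:
            self.memory = (self.momentum * self.memory
                           + (1.0 - self.momentum) * centroids)
        return self.memory

def attend(queries, context):
    """Single-head dot-product attention: object queries (Q, D) gather
    information from local clusters + scene memory (M, D)."""
    logits = queries @ context.T / np.sqrt(queries.shape[-1])
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ context

# One image's worth of the pipeline:
rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 16))          # per-pixel features from a backbone
clusters = kmeans(feats, k=4)               # local appearance parts
memory = SceneMemory(k=4, dim=16)
mem = memory.update(clusters)               # cross-image scene memory
queries = rng.normal(size=(3, 16))          # learnable object queries
context = np.concatenate([clusters, mem], axis=0)
refined = attend(queries, context)          # queries enriched by both cues
```

In the actual detector these steps would sit inside a query-based transformer head with learned projections; the sketch only shows how local clusters and a persistent memory can jointly serve as the attention context for object queries.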