🤖 AI Summary
To address the poor generalization of existing 3D detectors to variable-length RGB-D input sequences in robot navigation, this paper proposes the first end-to-end indoor 3D object detection framework that supports an arbitrary number of input frames. Methodologically, it introduces a geometric learner and a spatial hybrid attention module to enable efficient interaction between local geometric features and global semantic features, together with a novel dynamic token sampling strategy that adaptively adjusts per-frame feature density so that the global feature distribution remains consistent after multi-frame fusion. Evaluated on ScanNet, the method achieves state-of-the-art detection accuracy while remaining stable across input lengths of 1 to 8 frames, with a parameter count comparable to the baselines and a lightweight, efficient architecture. The core contribution is eliminating the fixed-frame constraint, enabling, for the first time, a single model to robustly process variable-length RGB-D sequences.
📝 Abstract
In this paper, we propose a novel network framework for indoor 3D object detection that handles a variable number of input frames in practical scenarios. Existing methods consider only a fixed form of input for a single detector, such as monocular RGB-D images or point clouds reconstructed from dense multi-view RGB-D images. In practical application scenarios such as robot navigation and manipulation, however, the raw input to a 3D detector is a sequence of RGB-D images with a variable number of frames rather than a reconstructed scene point cloud, and previous approaches perform poorly when the frame count varies. To make 3D object detection suitable for such practical tasks, we present a novel detection framework named AnyView, which generalizes well across different numbers of input frames with a single model. Specifically, we propose a geometric learner to mine the local geometric features of each input RGB-D frame and implement local-global feature interaction through a designed spatial mixture module. We further employ a dynamic token strategy that adaptively adjusts the number of features extracted from each frame, ensuring a consistent global feature density and further enhancing generalization after fusion. Extensive experiments on the ScanNet dataset show that our method achieves both strong generalizability and high detection accuracy with a simple, clean architecture containing a similar number of parameters to the baselines.
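The idea behind the dynamic token strategy can be sketched as follows: divide a fixed global token budget across however many frames arrive, so that 1 frame and 8 frames produce a similar number of fused features. This is a minimal illustrative sketch; the function names, the fixed budget, and the norm-based importance ranking are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def sample_tokens(frame_feats, total_budget=1024):
    """Fuse per-frame features under a fixed global token budget.

    frame_feats: list of (num_tokens_i, dim) arrays, one per frame.
    Returns an array of at most total_budget tokens, so the global
    feature density stays consistent regardless of frame count.
    """
    n_frames = len(frame_feats)
    per_frame = total_budget // n_frames  # adaptive per-frame quota
    fused = []
    for feats in frame_feats:
        k = min(per_frame, feats.shape[0])
        # Rank tokens by feature norm as a simple importance proxy
        # (hypothetical; the paper's criterion may differ).
        scores = np.linalg.norm(feats, axis=1)
        idx = np.argsort(-scores)[:k]
        fused.append(feats[idx])
    return np.concatenate(fused, axis=0)
```

With this allocation, a single-frame input contributes all 1024 tokens itself, while an 8-frame input contributes 128 tokens per frame, keeping the fused feature set the same size either way.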