🤖 AI Summary
To address the poor inter-frame matching robustness in RGB-D indoor SLAM caused by insufficient semantic object understanding, this paper proposes a gradient-guided hierarchical attention mechanism that explicitly embeds task-driven spatial attention into CNN feature representations. Methodologically, class activation mapping (CAM) and gradient backpropagation are leveraged to generate object-aware attention maps, which are adaptively fused with multi-level CNN features to construct an attention-enhanced feature correspondence model. The key contribution is the first differentiable integration of gradient-based attention directly into the convolutional feature space, enabling precise localization and representation enhancement of semantically salient regions. Experimental evaluation on large-scale indoor scenes demonstrates significant improvements: +4.2% frame-matching accuracy and an 18.7% reduction in absolute trajectory error (ATE), outperforming state-of-the-art baseline methods.
📝 Abstract
Attention models have recently emerged as a powerful approach, demonstrating significant progress in various fields. Visualization techniques, such as class activation mapping, provide visual insights into the reasoning of convolutional neural networks (CNNs). Using network gradients, it is possible to identify regions where the network pays attention during image recognition tasks. Furthermore, these gradients can be combined with CNN features to localize more generalizable, task-specific attentive (salient) regions within scenes. However, explicit use of this gradient-based attention information integrated directly into CNN representations for semantic object understanding remains limited. Such integration is particularly beneficial for visual tasks like simultaneous localization and mapping (SLAM), where CNN representations enriched with spatially attentive object locations can enhance performance. In this work, we propose utilizing task-specific network attention for RGB-D indoor SLAM. Specifically, we integrate layer-wise attention information derived from network gradients with CNN feature representations to improve frame association performance. Experimental results indicate improved performance compared to baseline methods, particularly for large environments.