🤖 AI Summary
This work addresses the high computational complexity and insufficient local detail modeling of existing Transformer-based RGB-D salient object detection methods. To this end, we propose STENet, a novel dual-stream encoder-decoder architecture that introduces a superpixel-driven cross-modal interaction mechanism. Our approach incorporates a global superpixel attention enhancement module and a local superpixel attention refinement module, effectively balancing region-level semantic representation and fine-grained detail preservation while reducing computational overhead. Coupled with an improved superpixel generation strategy and multi-scale feature fusion, the proposed method achieves competitive performance against state-of-the-art approaches across seven widely used RGB-D salient object detection benchmarks.
📝 Abstract
Transformer-based methods for RGB-D Salient Object Detection (SOD) have gained significant interest, owing to the transformer's exceptional capacity to capture long-range pixel dependencies. Nevertheless, current RGB-D SOD methods face challenges, such as the quadratic complexity of the attention mechanism and limited local-detail extraction. To overcome these limitations, we propose a novel Superpixel Token Enhancing Network (STENet), which introduces superpixels into cross-modal interaction. STENet follows a two-stream encoder-decoder structure. At its core are two tailored superpixel-driven cross-modal interaction modules, responsible for global and local feature enhancement. Specifically, we update the superpixel generation method by expanding the neighborhood range of each superpixel, allowing for flexible transformation between pixels and superpixels. Building on this updated superpixel generation method, we first propose the Superpixel Attention Global Enhancing Module, which models the global pixel-to-superpixel relationship rather than the traditional global pixel-to-pixel relationship, capturing region-level information while reducing computational complexity. We also propose the Superpixel Attention Local Refining Module, which leverages pixel similarity within superpixels to select a subset of pixels (i.e., local pixels) and then performs feature enhancement on these local pixels, thereby capturing the local details of interest. Furthermore, we fuse the globally and locally enhanced features together with cross-scale features to achieve a comprehensive feature representation. Experiments on seven RGB-D SOD datasets reveal that our STENet achieves competitive performance compared to state-of-the-art methods. The code and results of our method are available at https://github.com/Mark9010/STENet.
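The complexity argument behind the pixel-to-superpixel attention can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the soft-assignment input, and the mean-pooled superpixel tokens are illustrative assumptions; the point is only that attending from N pixels to M superpixel tokens costs O(N·M·C) instead of the O(N²·C) of dense pixel-to-pixel attention.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def superpixel_attention(pixels, assign):
    """Illustrative global pixel-to-superpixel attention.

    pixels: (N, C) array of pixel features.
    assign: (N, M) soft assignment of N pixels to M superpixels
            (hypothetical input; the paper derives it from its
            updated superpixel generation method).
    Returns (N, C) globally enhanced pixel features.
    """
    # Superpixel tokens as assignment-weighted means of pixel features.
    weights = assign / (assign.sum(axis=0, keepdims=True) + 1e-8)
    tokens = weights.T @ pixels                       # (M, C)

    # Attention map is (N, M), not (N, N): linear in pixel count.
    c = pixels.shape[1]
    attn = softmax(pixels @ tokens.T / np.sqrt(c))    # (N, M)
    return attn @ tokens                              # (N, C)
```

With M fixed to a few hundred superpixels, the attention map stays small even for high-resolution feature maps, which is the source of the claimed complexity reduction.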