STENet: Superpixel Token Enhancing Network for RGB-D Salient Object Detection

📅 2026-03-23 · 📈 Citations: 0 (Influential: 0)

🤖 AI Summary
This work addresses the high computational complexity and insufficient local detail modeling of existing Transformer-based RGB-D salient object detection methods. To this end, we propose STENet, a novel dual-stream encoder-decoder architecture that introduces a superpixel-driven cross-modal interaction mechanism. Our approach incorporates global superpixel attention enhancement and local superpixel attention refinement modules, effectively balancing region-level semantic representation with fine-grained detail preservation while reducing computational overhead. Coupled with an improved superpixel generation strategy and multi-scale feature fusion, the proposed method achieves competitive, state-of-the-art performance across seven widely used RGB-D salient object detection benchmarks.

📝 Abstract
Transformer-based methods for RGB-D Salient Object Detection (SOD) have attracted significant interest, owing to the transformer's exceptional capacity to capture long-range pixel dependencies. Nevertheless, current RGB-D SOD methods face challenges such as the quadratic complexity of the attention mechanism and limited local detail extraction. To overcome these limitations, we propose a novel Superpixel Token Enhancing Network (STENet), which introduces superpixels into cross-modal interaction. STENet follows a two-stream encoder-decoder structure; at its core are two tailored superpixel-driven cross-modal interaction modules responsible for global and local feature enhancement. Specifically, we update the superpixel generation method by expanding the neighborhood range of each superpixel, allowing flexible transformation between pixels and superpixels. Building on this, we first propose the Superpixel Attention Global Enhancing Module, which models the global pixel-to-superpixel relationship rather than the traditional global pixel-to-pixel relationship, capturing region-level information while reducing computational complexity. We also propose the Superpixel Attention Local Refining Module, which leverages pixel similarity within superpixels to select a subset of pixels (i.e., local pixels) and then enhances the features of these local pixels, thereby capturing the local details of interest. Furthermore, we fuse the globally and locally enhanced features with cross-scale features to achieve a comprehensive feature representation. Experiments on seven RGB-D SOD datasets show that STENet achieves competitive performance compared with state-of-the-art methods. The code and results of our method are available at https://github.com/Mark9010/STENet.
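The complexity argument behind pixel-to-superpixel attention can be sketched in a few lines: with N pixels and S superpixel tokens (S ≪ N), the attention map is N×S instead of N×N. The NumPy sketch below is only an illustration of this idea, not the authors' implementation; the hard superpixel assignment and mean-pooled tokens are simplifying assumptions (STENet uses its own superpixel generation and learned projections).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def superpixel_tokens(feats, assign):
    # pool pixel features (N, C) into superpixel tokens (S, C)
    # by averaging; hard assignment is a simplification here
    S = int(assign.max()) + 1
    tokens = np.zeros((S, feats.shape[1]))
    for s in range(S):
        tokens[s] = feats[assign == s].mean(axis=0)
    return tokens

def pixel_to_superpixel_attention(feats, assign):
    # queries: pixels (N, C); keys/values: superpixel tokens (S, C);
    # the attention map is (N, S) rather than the (N, N) of
    # standard pixel-to-pixel self-attention
    tokens = superpixel_tokens(feats, assign)
    scale = np.sqrt(feats.shape[1])
    attn = softmax(feats @ tokens.T / scale, axis=-1)  # (N, S)
    return attn @ tokens  # (N, C) region-enhanced pixel features
```

For an H×W feature map, N = H·W, so the cost drops from O(N²·C) to O(N·S·C); with a few hundred superpixels per image this is the main source of the claimed computational savings.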
Problem

Research questions and friction points this paper is trying to address.

- RGB-D Salient Object Detection
- Transformer
- Attention Mechanism
- Local Detail Extraction
- Computational Complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Superpixel
- Cross-Modal Interaction
- Global-Local Attention
- RGB-D Salient Object Detection
- Transformer
Jianlin Chen
School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
Gongyang Li
School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China; Yunnan Key Laboratory of Service Computing, Yunnan University of Finance and Economics, Kunming 650000, China
Zhijiang Zhang
School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China