🤖 AI Summary
In weakly supervised semantic segmentation (WSSS), Class Activation Maps (CAMs) often suffer from insufficient coupling between class activation responses and semantic information in the embedding space, leading to under-activation and co-occurrence noise that degrade segmentation accuracy. To address this, we propose a dual-path embedding optimization framework built upon the Vision Transformer (ViT) architecture, enabling plug-and-play integration. First, we introduce a semantic-aware attention mechanism that dynamically reconstructs token representations, amplifying high-confidence responses while suppressing low-confidence ones. Second, we introduce an embedding dual-optimization mechanism during class-to-patch interaction, coupled with a hybrid-feature alignment module that fuses RGB features, embedding-guided features, and self-attention weights to increase the reliability of candidate tokens. Evaluated on PASCAL VOC, our method achieves absolute mIoU gains of 3.6%, 1.5%, and 1.2% over prior state-of-the-art methods; on MS COCO, it improves mIoU by 1.2% and 1.6%. These results demonstrate substantial improvements in both CAM quality and segmentation performance.
📝 Abstract
Weakly supervised semantic segmentation (WSSS) typically utilizes limited semantic annotations to obtain initial Class Activation Maps (CAMs). However, due to the inadequate coupling between class activation responses and semantic information in high-dimensional space, CAMs are prone to object co-occurrence or under-activation, resulting in inferior recognition accuracy. To tackle this issue, we propose DOEI, Dual Optimization of Embedding Information, a novel approach that reconstructs embedding representations through semantic-aware attention weight matrices to improve the expressiveness of the embedding information. Specifically, DOEI amplifies tokens with high confidence and suppresses those with low confidence during the class-to-patch interaction. This alignment of activation responses with semantic information strengthens the propagation and decoupling of target features, enabling the generated embeddings to more accurately represent target features in high-level semantic space. In addition, we propose a hybrid-feature alignment module in DOEI that combines RGB values, embedding-guided features, and self-attention weights to increase the reliability of candidate tokens. Comprehensive experiments show that DOEI is an effective plug-and-play module that empowers state-of-the-art Vision Transformer-based WSSS models to significantly improve the quality of CAMs and segmentation performance on popular benchmarks, including PASCAL VOC (+3.6%, +1.5%, +1.2% mIoU) and MS COCO (+1.2%, +1.6% mIoU). Code will be available at https://github.com/AIGeeksGroup/DOEI.
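The amplify/suppress step described above, reweighting patch tokens by their class-to-patch attention confidence, can be sketched as follows. This is a minimal NumPy illustration only; the function name, thresholds, and scale factors are hypothetical and do not reproduce the paper's actual formulation:

```python
import numpy as np

def reweight_tokens(tokens, class_attn, hi=0.6, lo=0.2, amp=1.5, sup=0.5):
    """Illustrative confidence-based token reweighting (hypothetical
    parameters, not DOEI's exact method).

    tokens:     (N, D) array of patch-token embeddings.
    class_attn: (N,) class-to-patch attention scores in [0, 1].

    Tokens whose attention score is at least `hi` are amplified by `amp`,
    those at or below `lo` are suppressed by `sup`, and the rest are
    left unchanged.
    """
    scale = np.ones_like(class_attn)
    scale[class_attn >= hi] = amp
    scale[class_attn <= lo] = sup
    # Broadcast the per-token scale across the embedding dimension.
    return tokens * scale[:, None]

# Example: four tokens with mixed confidence scores.
tokens = np.ones((4, 3))
attn = np.array([0.9, 0.5, 0.1, 0.7])
out = reweight_tokens(tokens, attn)
```

In this toy run, the first and last tokens (attention 0.9 and 0.7) are amplified, the third (0.1) is suppressed, and the second (0.5) passes through unchanged; in the actual framework the reconstructed tokens would then feed back into the CAM generation.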