🤖 AI Summary
To address over-activation of non-target foreground regions and background leakage in CLIP-based weakly supervised semantic segmentation, this paper proposes a dual-rectification framework that integrates semantic and spatial constraints. Methodologically, it combines Cross-Modal Prototype Alignment (CMPA) with Superpixel-Guided Correction (SGC): CMPA mitigates inter-class semantic confusion via contrastive learning that aligns text and image prototypes, while SGC leverages superpixel-level spatial priors to regularize affinity propagation, improving localization accuracy and suppressing spurious responses. As a single-stage framework, it requires neither multi-stage training nor auxiliary annotations. On PASCAL VOC and MS COCO, it achieves 79.5% and 50.6% mIoU, respectively, outperforming existing single-stage methods and even several multi-stage approaches. These results demonstrate the effectiveness and generalizability of the dual-rectification mechanism for weakly supervised segmentation.
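The paper itself provides no code here; as a rough illustration only, the contrastive alignment of per-class text and image prototypes that CMPA performs could be sketched with an InfoNCE-style loss. The function name, prototype shapes, and temperature value below are assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

def cmpa_loss(img_protos, txt_protos, temperature=0.07):
    """InfoNCE-style loss: pull each class's image prototype toward its
    own text prototype, push it away from other classes' text prototypes.

    img_protos, txt_protos: (C, D) arrays of C class prototypes.
    (Illustrative sketch; not the paper's actual formulation.)
    """
    # L2-normalize so dot products are cosine similarities
    img = img_protos / np.linalg.norm(img_protos, axis=1, keepdims=True)
    txt = txt_protos / np.linalg.norm(txt_protos, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (C, C) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Positive pairs sit on the diagonal: class i's image vs. text prototype
    return float(-np.log(probs[np.diag_indices_from(probs)]).mean())
```

With perfectly aligned prototypes the diagonal similarities dominate and the loss approaches zero; mismatched prototypes yield a higher loss, which is the pressure that reduces inter-class overlap.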
📝 Abstract
In recent years, Contrastive Language-Image Pretraining (CLIP) has been widely applied to Weakly Supervised Semantic Segmentation (WSSS) thanks to its powerful cross-modal semantic understanding. This paper proposes a novel Semantic and Spatial Rectification (SSR) method to address two limitations of existing CLIP-based WSSS approaches: over-activation in non-target foreground regions and in background areas. Specifically, at the semantic level, Cross-Modal Prototype Alignment (CMPA) establishes a contrastive learning mechanism that aligns the feature spaces of the two modalities, reducing inter-class overlap while strengthening semantic correlations and thereby effectively rectifying over-activation in non-target foreground regions. At the spatial level, Superpixel-Guided Correction (SGC) leverages superpixel-based spatial priors to precisely filter out interference from non-target regions during affinity propagation, significantly rectifying background over-activation. Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate that our method outperforms all single-stage approaches, as well as more complex multi-stage approaches, achieving mIoU scores of 79.5% and 50.6%, respectively.
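The abstract does not spell out how the superpixel prior suppresses background responses; one minimal way to realize the idea is to pool a class activation map over each superpixel and zero out segments whose mean activation is low. The function name, threshold, and pooling rule below are assumptions for illustration, not the paper's SGC module:

```python
import numpy as np

def superpixel_correct(cam, superpixels, threshold=0.3):
    """Pool a class activation map (CAM) over superpixels and suppress
    segments whose mean activation falls below `threshold`.

    cam:         (H, W) float activation map in [0, 1].
    superpixels: (H, W) int label map, one id per superpixel.
    (Illustrative sketch of a superpixel spatial prior; not the paper's SGC.)
    """
    corrected = np.zeros_like(cam)
    for sp_id in np.unique(superpixels):
        mask = superpixels == sp_id
        mean_act = cam[mask].mean()
        # Keep superpixels that agree with the activation; zero out the rest,
        # which removes weak, scattered background responses
        corrected[mask] = mean_act if mean_act >= threshold else 0.0
    return corrected
```

Because superpixels follow object boundaries, averaging within each segment sharpens the activation edge, while the threshold removes isolated background leakage before affinity propagation.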