SSR: Semantic and Spatial Rectification for CLIP-based Weakly Supervised Segmentation

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address over-activation of non-target foreground regions and background leakage in CLIP-based weakly supervised semantic segmentation, this paper proposes a dual-rectification framework integrating semantic and spatial constraints. Methodologically, it combines Cross-Modal Prototype Alignment (CMPA) and Superpixel-Guided Correction (SGC): CMPA mitigates inter-class semantic confusion via contrastive learning that aligns text and image prototypes; SGC leverages superpixel-level spatial priors to regularize feature affinity propagation, improving localization accuracy and suppressing spurious responses. As a single-stage framework, it requires no multi-stage training or auxiliary annotations. On PASCAL VOC and MS COCO it achieves 79.5% and 50.6% mIoU, respectively, outperforming existing single-stage methods and even several multi-stage approaches. These results demonstrate the effectiveness and generalizability of the dual-rectification mechanism for weakly supervised segmentation.

📝 Abstract
In recent years, Contrastive Language-Image Pretraining (CLIP) has been widely applied to Weakly Supervised Semantic Segmentation (WSSS) due to its powerful cross-modal semantic understanding capabilities. This paper proposes a novel Semantic and Spatial Rectification (SSR) method to address two limitations of existing CLIP-based WSSS approaches: over-activation in non-target foreground regions and in background areas. Specifically, at the semantic level, the Cross-Modal Prototype Alignment (CMPA) module establishes a contrastive learning mechanism that enforces feature-space alignment across modalities, reducing inter-class overlap while enhancing semantic correlations, thereby effectively rectifying over-activation in non-target foreground regions; at the spatial level, the Superpixel-Guided Correction (SGC) module leverages superpixel-based spatial priors to filter out interference from non-target regions during affinity propagation, significantly rectifying background over-activation. Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate that our method outperforms all single-stage approaches, as well as more complex multi-stage approaches, achieving mIoU scores of 79.5% and 50.6%, respectively.
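The cross-modal prototype alignment described in the abstract could be sketched as a symmetric InfoNCE loss over per-class prototypes. This is an illustrative sketch, not the authors' implementation: the function name, temperature value, and the exact contrastive formulation are assumptions; only the idea of aligning matching text/image class prototypes while pushing apart mismatched pairs comes from the paper.

```python
import torch
import torch.nn.functional as F

def cmpa_loss(image_protos: torch.Tensor,
              text_protos: torch.Tensor,
              temperature: float = 0.07) -> torch.Tensor:
    """Contrastive alignment between per-class image prototypes and
    CLIP text prototypes.

    image_protos, text_protos: (C, D) tensors, one row per class.
    Row i of each tensor is a positive pair; all other pairings are
    negatives (symmetric InfoNCE over the C classes).
    """
    img = F.normalize(image_protos, dim=-1)
    txt = F.normalize(text_protos, dim=-1)
    logits = img @ txt.t() / temperature          # (C, C) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # symmetric: image-to-text and text-to-image directions
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

With perfectly aligned prototypes the diagonal of the similarity matrix dominates and the loss approaches zero; confusable classes (overlapping prototypes) raise off-diagonal similarities and are penalized, which matches the paper's stated goal of reducing inter-class overlap.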
Problem

Research questions and friction points this paper is trying to address.

Rectify over-activation in non-target foreground regions
Correct background over-activation using spatial priors
Improve weakly supervised semantic segmentation with CLIP
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic rectification via cross-modal prototype alignment
Spatial rectification using superpixel-guided correction
Addresses over-activation in non-target foreground and background
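The superpixel-guided correction above amounts to enforcing that activations are coherent within each superpixel. A minimal sketch of that idea, assuming precomputed superpixel labels (e.g. from `skimage.segmentation.slic`); the function name and mean-pooling rule are illustrative assumptions, not the paper's exact correction:

```python
import numpy as np

def superpixel_pool(cam: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Replace each pixel's activation with the mean activation of its
    superpixel, suppressing isolated spurious responses.

    cam:      (H, W) class activation map.
    segments: (H, W) integer superpixel labels.
    """
    pooled = np.zeros_like(cam, dtype=np.float64)
    for sp in np.unique(segments):
        mask = segments == sp
        pooled[mask] = cam[mask].mean()
    return pooled
```

Because superpixels follow image boundaries, pooling within them snaps activations to object edges and damps stray background responses, which is the spatial prior the SGC module exploits.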
Xiuli Bi
Professor of Computer Science, Chongqing University of Posts and Telecommunications
Image Processing · Pattern Recognition
Die Xiao
Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
Junchao Fan
Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
Bin Xiao
Meta GenAI
Computer Vision · Vision and Language · Machine Learning · Human Pose Estimation