SCHNet: SAM Marries CLIP for Human Parsing

📅 2025-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of jointly achieving fine-grained segmentation and high-level semantic understanding in human parsing, this paper proposes a synergistic framework that, for the first time, enables end-to-end co-optimization of the Segment Anything Model (SAM) and CLIP for pixel-level semantic segmentation. It introduces a feature alignment and semantic-guided fusion mechanism, coupled with a lightweight semantic refinement module and an efficient fine-tuning strategy, which injects CLIP's cross-modal semantic priors into SAM while preserving SAM's spatial precision. The method achieves state-of-the-art performance on the LIP, PPP, and CIHP benchmarks, with significantly lower training cost than full-parameter fine-tuning and real-time inference speed. Its core contribution is a novel paradigm for joint semantic-geometric optimization across vision foundation models.

📝 Abstract
Vision Foundation Models (VFMs) such as the Segment Anything Model (SAM) and the Contrastive Language-Image Pre-training model (CLIP) have shown promising performance on segmentation and detection tasks. However, although SAM excels at fine-grained segmentation, it faces major challenges when applied to semantic-aware segmentation. Conversely, while CLIP exhibits strong semantic understanding by aligning global language and vision features, it is deficient in fine-grained segmentation tasks. Human parsing requires segmenting human bodies into their constituent parts, demanding both accurate fine-grained segmentation and high-level semantic understanding of each part. Based on these complementary traits of SAM and CLIP, we formulate highly efficient modules that integrate their features to benefit human parsing. We propose a Semantic-Refinement Module that fuses CLIP's semantic features with SAM's features to improve parsing. Moreover, we formulate an efficient Fine-tuning Module that adapts the pretrained SAM to human parsing, which needs high-level semantic information while simultaneously demanding spatial detail; this significantly reduces training time compared with full-parameter fine-tuning while achieving notable performance. Extensive experiments demonstrate the effectiveness of our method on the LIP, PPP, and CIHP databases.
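The paper's exact fusion design is not given on this page. As a rough illustration of the idea behind semantic-guided fusion — matching SAM's dense pixel features against CLIP's per-class text embeddings, then injecting the resulting semantic prior back into the spatial features — here is a minimal NumPy sketch. The function name, shapes, and the residual injection are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def semantic_guided_fusion(sam_feats, clip_text_emb):
    """Sketch of fusing SAM dense features with CLIP class embeddings.

    sam_feats:     (C, H, W) dense features from the SAM image encoder.
    clip_text_emb: (K, C) one CLIP text embedding per body-part class.
    Returns per-pixel class logits (K, H, W) and refined features (C, H, W).
    """
    C, H, W = sam_feats.shape
    f = sam_feats.reshape(C, H * W)
    # cosine similarity between each pixel feature and each class embedding
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    t = clip_text_emb / (np.linalg.norm(clip_text_emb, axis=1, keepdims=True) + 1e-8)
    logits = t @ f                       # (K, H*W) class score per pixel
    # per-pixel distribution over classes acts as a semantic attention map
    attn = softmax(logits, axis=0)       # (K, H*W)
    # inject the class embeddings back into the spatial features (residual)
    refined = f + t.T @ attn             # (C, H*W)
    return logits.reshape(-1, H, W), refined.reshape(C, H, W)
```

The refined features keep SAM's spatial resolution while carrying CLIP's class-level semantics, which is the general intent the abstract describes; the actual module presumably uses learned projections rather than raw cosine similarity.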
Problem

Research questions and friction points this paper is trying to address.

Integrate SAM and CLIP for human parsing
Balance fine-grained segmentation and semantic understanding
Reduce training time while maintaining performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates SAM and CLIP into a single framework for human parsing
Introduces a Semantic-Refinement Module that fuses CLIP's semantic features with SAM's spatial features
Employs an efficient Fine-tuning Module that cuts training time relative to full-parameter fine-tuning
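The page does not specify how the efficient Fine-tuning Module works internally. One common way such modules cut training cost is a bottleneck adapter: the large pretrained backbone stays frozen and only small down/up projection matrices are trained. The class below is a hypothetical NumPy sketch of that general scheme, not the paper's module:

```python
import numpy as np

class BottleneckAdapter:
    """Illustrative adapter: down-project, ReLU, up-project, residual add.

    Only these two small matrices would be trained; the frozen backbone
    weights stay fixed, which is where the training-time savings come from.
    """
    def __init__(self, dim, bottleneck=16, seed=0):
        rng = np.random.default_rng(seed)
        self.down = rng.normal(0.0, 0.02, (dim, bottleneck))
        # zero-init the up projection so the adapter starts as an identity map
        self.up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        h = np.maximum(x @ self.down, 0.0)   # ReLU bottleneck
        return x + h @ self.up               # residual connection

    def num_params(self):
        return self.down.size + self.up.size
```

For a feature dimension of 256 and bottleneck of 16, the adapter trains 8,192 parameters versus 65,536 for a single full 256x256 projection, and the zero-initialized up-projection means training starts from the unmodified pretrained behavior.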