Exploring Token-Level Augmentation in Vision Transformer for Semi-Supervised Semantic Segmentation

๐Ÿ“… 2025-03-04
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the performance degradation of Vision Transformers (ViTs) in semi-supervised semantic segmentation under scarce labeled data, this paper proposes TokenMixโ€”the first token-level mixup augmentation method tailored for ViTs. Our approach tackles two key challenges: (1) a dual-level augmentation strategy operating jointly on image and feature spaces, integrating CutMix, RandAugment, and feature-space perturbations; and (2) a dual-branch consistency regularization framework explicitly designed to align with ViTโ€™s global self-attention mechanism, thereby improving pseudo-label reliability through collaborative optimization. Evaluated on Pascal VOC 2012, Cityscapes, and COCO benchmarks, TokenMix consistently outperforms state-of-the-art methods, achieving absolute mIoU gains of over 3.2% under extremely low labeling ratios (e.g., 1/16 or 1/32 labels). These results empirically validate the effectiveness and generalizability of token-level modeling for semi-supervised ViT-based segmentation.

Technology Category

Application Category

๐Ÿ“ Abstract
Semi-supervised semantic segmentation has witnessed remarkable advancements in recent years. However, existing algorithms are based on convolutional neural networks and directly applying them to Vision Transformers poses certain limitations due to conceptual disparities. To this end, we propose TokenMix, a data augmentation technique specifically designed for semi-supervised semantic segmentation with Vision Transformers. TokenMix aligns well with the global attention mechanism by mixing images at the token level, enhancing learning capability for contexutual information among image patches. We further incorporate image augmentation and feature augmentation to promote the diversity of augmentation. Moreover, to enhance consistency regularization, we propose a dual-branch framework where each branch applies both image augmentation and feature augmentation to the input image. We conduct extensive experiments across multiple benchmark datasets, including Pascal VOC 2012, Cityscapes, and COCO. Results suggest that the proposed method outperforms state-of-the-art algorithms with notably observed accuracy improvement, especially under the circumstance of limited fine annotations.
Problem

Research questions and friction points this paper is trying to address.

Enhance semi-supervised semantic segmentation using Vision Transformers
Develop TokenMix for token-level image mixing in Vision Transformers
Improve accuracy with limited annotations via dual-branch augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

TokenMix for token-level image mixing
Dual-branch framework for consistency regularization
Combines image and feature augmentation techniques
๐Ÿ”Ž Similar Papers
No similar papers found.
Dengke Zhang
Dengke Zhang
second-year PhD student at South China University of Technology
Computer VisionImage Segmentation
Quan Tang
Quan Tang
Pengcheng Laboratory
Computer VisionAnomaly DetectionDeep Learning
F
Fagui Liu
School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China; Department of New Network, Pengcheng Laboratory, Shenzhen 518000, China
C
C. L. Philip Chen
School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China; State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China
H
Haiqing Mei