TASeg: Text-aware RGB-T Semantic Segmentation based on Fine-tuning Vision Foundation Models

📅 2025-06-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses two key challenges in RGB-T semantic segmentation: (1) difficulty distinguishing visually similar categories, and (2) low-efficiency multimodal (RGB/thermal/text) fusion. To this end, we propose a text-aware lightweight framework. Methodologically: (1) We introduce a novel Dynamic Feature Fusion Module (DFFM) to achieve efficient cross-modal alignment between RGB and thermal features; (2) We pioneer the integration of CLIP-derived text embeddings into the mask decoder to enable semantic-level classification refinement and cross-modal alignment; (3) We employ LoRA to fine-tune the frozen SAM image encoder, balancing performance and parameter efficiency. Our approach achieves state-of-the-art results on multiple RGB-T benchmarks, with significant improvements in fine-grained segmentation accuracy, 42% fewer parameters, and a 31% speedup in inference latency.

Technology Category

Application Category

📝 Abstract
Reliable semantic segmentation of open environments is essential for intelligent systems, yet significant problems remain: 1) Existing RGB-T semantic segmentation models mainly rely on low-level visual features and lack high-level textual information, which struggle with accurate segmentation when categories share similar visual characteristics. 2) While SAM excels in instance-level segmentation, integrating it with thermal images and text is hindered by modality heterogeneity and computational inefficiency. To address these, we propose TASeg, a text-aware RGB-T segmentation framework by using Low-Rank Adaptation (LoRA) fine-tuning technology to adapt vision foundation models. Specifically, we propose a Dynamic Feature Fusion Module (DFFM) in the image encoder, which effectively merges features from multiple visual modalities while freezing SAM's original transformer blocks. Additionally, we incorporate CLIP-generated text embeddings in the mask decoder to enable semantic alignment, which further rectifies the classification error and improves the semantic understanding accuracy. Experimental results across diverse datasets demonstrate that our method achieves superior performance in challenging scenarios with fewer trainable parameters.
Problem

Research questions and friction points this paper is trying to address.

Lack of high-level text info in RGB-T segmentation models
Difficulty integrating SAM with thermal images and text
Need for accurate semantic alignment in segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA fine-tuning for vision foundation models
Dynamic Feature Fusion Module for multi-modal merging
CLIP text embeddings for semantic alignment
🔎 Similar Papers
No similar papers found.
M
Meng Yu
School of Automation, Beijing Institute of Technology, Beijing 100081, China
Te Cui
Te Cui
Beijing Institute of Technology
Embodied AI
Q
Qitong Chu
School of Automation, Beijing Institute of Technology, Beijing 100081, China
Wenjie Song
Wenjie Song
School of Automation, Beijing Institute of Technology, Beijing 100081, China
Y
Yi Yang
School of Automation, Beijing Institute of Technology, Beijing 100081, China
Y
Yufeng Yue
School of Automation, Beijing Institute of Technology, Beijing 100081, China