🤖 AI Summary
This work addresses two key challenges in RGB-T semantic segmentation: (1) difficulty distinguishing visually similar categories, and (2) low-efficiency multimodal (RGB/thermal/text) fusion. To this end, we propose a text-aware lightweight framework. Methodologically: (1) We introduce a novel Dynamic Feature Fusion Module (DFFM) to achieve efficient cross-modal alignment between RGB and thermal features; (2) We pioneer the integration of CLIP-derived text embeddings into the mask decoder to enable semantic-level classification refinement and cross-modal alignment; (3) We employ LoRA to fine-tune the frozen SAM image encoder, balancing performance and parameter efficiency. Our approach achieves state-of-the-art results on multiple RGB-T benchmarks, with significant improvements in fine-grained segmentation accuracy, 42% fewer parameters, and a 31% speedup in inference latency.
📝 Abstract
Reliable semantic segmentation of open environments is essential for intelligent systems, yet significant problems remain: 1) Existing RGB-T semantic segmentation models mainly rely on low-level visual features and lack high-level textual information, which struggle with accurate segmentation when categories share similar visual characteristics. 2) While SAM excels in instance-level segmentation, integrating it with thermal images and text is hindered by modality heterogeneity and computational inefficiency. To address these, we propose TASeg, a text-aware RGB-T segmentation framework by using Low-Rank Adaptation (LoRA) fine-tuning technology to adapt vision foundation models. Specifically, we propose a Dynamic Feature Fusion Module (DFFM) in the image encoder, which effectively merges features from multiple visual modalities while freezing SAM's original transformer blocks. Additionally, we incorporate CLIP-generated text embeddings in the mask decoder to enable semantic alignment, which further rectifies the classification error and improves the semantic understanding accuracy. Experimental results across diverse datasets demonstrate that our method achieves superior performance in challenging scenarios with fewer trainable parameters.