MASTER: Multimodal Segmentation with Text Prompts

📅 2025-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses RGB-thermal-language multimodal semantic segmentation under complex environments. We propose a lightweight, highly adaptive fusion framework. Methodologically, we introduce— for the first time— a large language model (LLM) as a learnable cross-modal alignment core, enabling text-prompt-driven dynamic feature fusion. A dual-path image encoder coupled with a codebook-based token generation mechanism preserves modality-specific characteristics while enhancing generalization. Additionally, learnable text embeddings and a lightweight decoder drastically reduce parameter count. Evaluated on a multi-scenario autonomous-driving RGB-Thermal benchmark, our method achieves state-of-the-art performance in both segmentation accuracy and environmental robustness. These results empirically validate the LLM’s efficacy and practicality as a universal multimodal fusion engine.

Technology Category

Application Category

📝 Abstract
RGB-Thermal fusion is a potential solution for various weather and light conditions in challenging scenarios. However, plenty of studies focus on designing complex modules to fuse different modalities. With the widespread application of large language models (LLMs), valuable information can be more effectively extracted from natural language. Therefore, we aim to leverage the advantages of large language models to design a structurally simple and highly adaptable multimodal fusion model architecture. We proposed MultimodAl Segmentation with TExt PRompts (MASTER) architecture, which integrates LLM into the fusion of RGB-Thermal multimodal data and allows complex query text to participate in the fusion process. Our model utilizes a dual-path structure to extract information from different modalities of images. Additionally, we employ LLM as the core module for multimodal fusion, enabling the model to generate learnable codebook tokens from RGB, thermal images, and textual information. A lightweight image decoder is used to obtain semantic segmentation results. The proposed MASTER performs exceptionally well in benchmark tests across various automated driving scenarios, yielding promising results.
Problem

Research questions and friction points this paper is trying to address.

Fuses RGB-Thermal data for diverse weather and light conditions.
Leverages large language models for multimodal fusion simplicity.
Enables text prompts to enhance segmentation in autonomous driving.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLM for RGB-Thermal multimodal fusion
Uses dual-path structure for image modality extraction
Employs lightweight decoder for semantic segmentation
🔎 Similar Papers
No similar papers found.