MMSense: Adapting Vision-based Foundation Model for Multi-task Multi-modal Wireless Sensing

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large AI models in wireless communications are predominantly unimodal and single-task (e.g., channel modeling or beamforming) and therefore cannot support unified wireless sensing. To address this, we propose MMSense, a multi-task, multi-modal foundation model tailored for wireless environments that maps image, radar, LiDAR, and textual inputs into a shared, vision-compatible feature space. The method combines a vision-language model backbone with cross-modal representation learning, instruction tuning, and multi-task learning, augmented by a modality-gating mechanism, an uncertainty-weighted loss, and task-specific sequential attention to enable cross-modal alignment and instruction-driven generalization. Evaluated on real-world wireless scenario datasets, MMSense outperforms both domain-specific models and general-purpose large models, showing stronger generalization and performance in joint environmental, channel, and human sensing.
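The summary mentions an uncertainty-weighted loss for balancing the channel, environment, and human sensing objectives. The paper's exact formulation is not reproduced in this entry; below is a minimal PyTorch sketch of the commonly used homoscedastic-uncertainty weighting (Kendall et al.), with the class name, the three-task split, and the placeholder loss values all as illustrative assumptions.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Sketch of homoscedastic-uncertainty loss weighting.

    Each task i gets a learnable log-variance s_i; the combined loss is
    sum_i exp(-s_i) * L_i + s_i, so noisier tasks are automatically
    down-weighted during joint multi-task training.
    """
    def __init__(self, num_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = torch.zeros((), dtype=torch.float32)
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

# Hypothetical usage: three sensing heads (channel, environment, human)
criterion = UncertaintyWeightedLoss(num_tasks=3)
total_loss = criterion([torch.tensor(0.8), torch.tensor(1.2), torch.tensor(0.5)])
```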

📝 Abstract
Large AI models have been widely adopted in wireless communications for channel modeling, beamforming, and resource optimization. However, most existing efforts remain limited to single-modality inputs and channel-specific objectives, overlooking the broader potential of large foundation models for unified wireless sensing. To bridge this gap, we propose MMSense, a multi-modal, multi-task foundation model that jointly addresses channel-centric, environment-aware, and human-centered sensing. Our framework integrates image, radar, LiDAR, and textual data by transforming them into vision-compatible representations, enabling effective cross-modal alignment within a unified feature space. A modality gating mechanism adaptively fuses these representations, while a vision-based large language model backbone enables unified feature alignment and instruction-driven task adaptation. Furthermore, task-specific sequential attention and uncertainty-based loss weighting mechanisms enhance cross-task generalization. Experiments on real wireless scenario datasets show that our approach outperforms both task-specific and large-model baselines, confirming its strong generalization across heterogeneous sensing tasks.
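The abstract's modality gating mechanism adaptively fuses the image, radar, LiDAR, and text features once they have been projected into a shared vision-compatible space. Its exact design is not detailed in this entry; the sketch below shows one plausible soft-gating layer in PyTorch, assuming per-modality embeddings of equal width, with the `ModalityGate` name and the 768-dimensional feature size as illustrative choices.

```python
import torch
import torch.nn as nn

class ModalityGate(nn.Module):
    """Sketch of a soft modality-gating fusion layer.

    Each modality (image, radar, LiDAR, text) is assumed to already be
    projected into a shared d-dimensional space; the gate scores each
    modality and returns a convex combination of their features.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                               # feats: (batch, num_modalities, dim)
        weights = torch.softmax(self.score(feats), dim=1)   # (batch, num_modalities, 1)
        return (weights * feats).sum(dim=1)                  # (batch, dim) fused feature

# Hypothetical usage: fuse four modality embeddings of width 768
gate = ModalityGate(dim=768)
fused = gate(torch.randn(2, 4, 768))
```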
Problem

Research questions and friction points this paper is trying to address.

Adapting vision foundation models for multi-modal wireless sensing
Integrating image, radar, LiDAR, and text into a unified feature space
Enabling cross-task generalization for heterogeneous sensing objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts vision foundation model for wireless sensing
Integrates multi-modal data via vision-compatible representations (see the adapter sketch after this list)
Uses modality gating and uncertainty-based loss weighting
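The second bullet relies on non-vision modalities being re-expressed as vision-style tokens so a vision-language backbone can consume them. How MMSense does this is not specified in this entry; the sketch below shows one straightforward option, a ViT-style patch embedding applied to a radar range-Doppler map, with the adapter name, patch size, and embedding width all as assumptions.

```python
import torch
import torch.nn as nn

class RadarToVisionAdapter(nn.Module):
    """Hypothetical adapter that turns a radar range-Doppler map into
    vision-style patch tokens, so it can be fed to a vision-language
    backbone alongside image patch embeddings."""
    def __init__(self, in_channels: int = 1, embed_dim: int = 768, patch: int = 16):
        super().__init__()
        # Patchify the radar map the same way a ViT patch embedding does.
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, radar_map):                  # (batch, 1, H, W)
        tokens = self.proj(radar_map)              # (batch, embed_dim, H/16, W/16)
        return tokens.flatten(2).transpose(1, 2)   # (batch, num_patches, embed_dim)

# Hypothetical usage: a 128x128 range-Doppler map becomes 64 tokens of width 768
adapter = RadarToVisionAdapter()
tokens = adapter(torch.randn(2, 1, 128, 128))      # -> shape (2, 64, 768)
```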
🔎 Similar Papers
No similar papers found.