🤖 AI Summary
Current large AI models in wireless communications are predominantly unimodal and single-task (e.g., channel modeling or beamforming) and thus fail to support unified wireless sensing. To address this, we propose MMSense, a multi-task, multimodal foundation model tailored for wireless environments: it maps image, radar, LiDAR, and textual inputs into a shared vision-compatible feature space. Our method integrates a vision-language model backbone with cross-modal representation learning, instruction tuning, and multi-task learning, enhanced by a modality-gating mechanism, an uncertainty-weighted loss, and task-specific sequential attention to enable cross-modal alignment and instruction-driven generalization. Evaluated on real-world wireless scenario datasets, MMSense significantly outperforms both domain-specific models and general-purpose large models, demonstrating superior generalization and performance in joint environmental, channel, and human sensing.
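The uncertainty-weighted loss mentioned above is not specified in detail here; a common formulation for balancing heterogeneous sensing tasks is homoscedastic uncertainty weighting, where a learnable log-variance per task scales that task's loss. The sketch below (class and parameter names are illustrative, not from the paper) shows the idea in PyTorch:

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine per-task losses using learned homoscedastic uncertainty.

    Illustrative sketch of one standard uncertainty-based weighting scheme;
    the paper's exact loss formulation may differ.
    """
    def __init__(self, num_tasks: int):
        super().__init__()
        # One log(sigma^2) per task, optimized jointly with the model.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = torch.zeros(())
        for i, loss in enumerate(task_losses):
            # Tasks with high predicted uncertainty get down-weighted;
            # the additive log-variance term prevents sigma from diverging.
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total
```

With all log-variances initialized to zero, the combined loss starts as a plain sum of the per-task losses and the weights adapt during training.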
📝 Abstract
Large AI models have been widely adopted in wireless communications for channel modeling, beamforming, and resource optimization. However, most existing efforts remain limited to single-modality inputs and channel-specific objectives, overlooking the broader potential of large foundation models for unified wireless sensing. To bridge this gap, we propose MMSense, a multi-modal, multi-task foundation model that jointly addresses channel-centric, environment-aware, and human-centered sensing. Our framework integrates image, radar, LiDAR, and textual data by transforming them into vision-compatible representations, enabling effective cross-modal alignment within a unified feature space. A modality gating mechanism adaptively fuses these representations, while a vision-based large language model backbone enables unified feature alignment and instruction-driven task adaptation. Furthermore, task-specific sequential attention and uncertainty-based loss weighting mechanisms enhance cross-task generalization. Experiments on real wireless scenario datasets show that our approach outperforms both task-specific and large-model baselines, confirming its strong generalization across heterogeneous sensing tasks.
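The modality gating mechanism described in the abstract can be understood as a learned, input-dependent weighting over per-modality features before fusion. The following minimal PyTorch sketch (names and dimensions are assumptions, not the paper's implementation) illustrates one simple realization as a softmax gate over modality embeddings:

```python
import torch
import torch.nn as nn

class ModalityGate(nn.Module):
    """Adaptively fuse per-modality feature vectors with a softmax gate.

    Illustrative sketch only; the paper's gating architecture may differ.
    Expects features already projected into a shared (vision-compatible)
    space of dimension `feat_dim`.
    """
    def __init__(self, feat_dim: int):
        super().__init__()
        # Scores each modality's feature vector; softmax turns the
        # scores into per-sample fusion weights.
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_modalities, feat_dim), e.g. image/radar/LiDAR/text
        weights = torch.softmax(self.score(feats).squeeze(-1), dim=-1)
        # Weighted sum over the modality axis -> (batch, feat_dim)
        return (weights.unsqueeze(-1) * feats).sum(dim=1)
```

Because the weights are computed per sample, the gate can emphasize, say, LiDAR in cluttered scenes and radar in low-visibility ones, which matches the adaptive fusion behavior the abstract describes.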