🤖 AI Summary
This work addresses the excessive reliance on ideal angular priors in high-precision beam prediction for next-generation wireless communications by proposing the first beam prediction framework based on a multimodal large language model (MLLM). The approach integrates RGB images and LiDAR point clouds through dedicated modality encoders, a beam-guided attention masking mechanism, and a high-frequency temporal alignment strategy, achieving robust cross-modal feature fusion in dynamic environments. The authors construct Multimodal-Wireless, the first large-scale multimodal dataset for wireless communication, and validate their method in high-fidelity ray-tracing simulations. The framework achieves an average Top-1 prediction accuracy of 80.8% and an average normalized beamforming gain of 89.1%, significantly outperforming existing approaches.
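The abstract does not describe how the beam-guided attention masking is implemented, but a minimal sketch can illustrate the general idea: per-beam queries attend over concatenated RGB and LiDAR tokens, with attention blocked for tokens judged irrelevant to each candidate beam. Everything below (the use of PyTorch, the `BeamGuidedFusion` module, the relevance threshold, and all tensor shapes) is an illustrative assumption, not the paper's actual design.

```python
import torch
import torch.nn as nn


class BeamGuidedFusion(nn.Module):
    """Hypothetical fusion head: per-beam queries over RGB + LiDAR tokens."""

    def __init__(self, dim: int = 256, num_heads: int = 8, num_beams: int = 64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # One learnable query per candidate beam in the codebook (assumption).
        self.beam_queries = nn.Parameter(torch.randn(num_beams, dim) * 0.02)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, rgb_tokens, lidar_tokens, relevance):
        # rgb_tokens:   (B, N_rgb, dim)   from the image encoder
        # lidar_tokens: (B, N_lidar, dim) from the point-cloud encoder
        # relevance:    (B, num_beams, N_rgb + N_lidar) scores in [0, 1],
        #               e.g. whether a token falls in a beam's angular sector
        batch = rgb_tokens.size(0)
        tokens = torch.cat([rgb_tokens, lidar_tokens], dim=1)       # (B, N, dim)
        queries = self.beam_queries.unsqueeze(0).expand(batch, -1, -1)

        mask = relevance < 0.5            # True = blocked from attention
        # Guard: never block every token for a beam (softmax would yield NaN).
        mask[mask.all(dim=-1)] = False
        # MultiheadAttention expects a (B * num_heads, L, S) boolean mask.
        attn_mask = mask.repeat_interleave(self.attn.num_heads, dim=0)

        fused, _ = self.attn(queries, tokens, tokens, attn_mask=attn_mask)
        return self.classifier(fused).squeeze(-1)  # (B, num_beams) beam logits


# Toy usage with random features (all sizes are placeholders):
model = BeamGuidedFusion()
logits = model(torch.randn(2, 196, 256), torch.randn(2, 128, 256),
               torch.rand(2, 64, 196 + 128))
print(logits.shape)  # torch.Size([2, 64])
```

The masking step is what distinguishes this from plain cross-attention: each candidate beam only aggregates evidence from sensor tokens plausibly related to its direction, which is one way to reduce dependence on an explicit angle-of-departure prior.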
📝 Abstract
Accurate beam prediction is a key enabler for next-generation wireless communication systems. In this paper, we propose a multimodal large language model (LLM)-based beam prediction framework that effectively utilizes contextual information provided by sensory data, including RGB camera images and LiDAR point clouds. To fuse these heterogeneous modalities effectively, we design specialized modality encoders together with a beam-guided attention masking mechanism and a high-frequency temporal alignment strategy, enabling robust cross-modal feature integration in dynamic environments. Furthermore, we construct a large-scale multimodal dataset for wireless communication, named Multimodal-Wireless, which covers diverse weather and traffic conditions with high-fidelity ray-tracing labels. Extensive simulation results demonstrate that the proposed approach significantly reduces the reliance on oracle angle-of-departure knowledge and consistently outperforms state-of-the-art multimodal LLM-based beam prediction methods in both beam prediction accuracy and communication performance, improving the average Top-1 accuracy to 80.8% and the average normalized beamforming gain to 89.1%.
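For readers unfamiliar with the two reported metrics, the definitions below follow the standard convention in the beam prediction literature; the notation (codebook $\{\mathbf{f}_i\}$, channel $\mathbf{h}_n$) is assumed, since the abstract does not spell the formulas out.

```latex
% Standard metric definitions (notation assumed; not given in the abstract).
% \hat{i}_n : predicted beam index for sample n
% i_n^\star = \arg\max_i |\mathbf{h}_n^{\mathsf{H}} \mathbf{f}_i|^2 : optimal index
\[
\text{Top-1 accuracy} = \frac{1}{N}\sum_{n=1}^{N}
  \mathbb{1}\big[\hat{i}_n = i_n^{\star}\big],
\qquad
\text{Normalized gain} = \frac{1}{N}\sum_{n=1}^{N}
  \frac{\big|\mathbf{h}_n^{\mathsf{H}}\,\mathbf{f}_{\hat{i}_n}\big|^{2}}
       {\big|\mathbf{h}_n^{\mathsf{H}}\,\mathbf{f}_{i_n^{\star}}\big|^{2}}
\]
```

Under these definitions, a normalized gain of 89.1% means the predicted beams capture, on average, 89.1% of the received power that an exhaustive search over the codebook would achieve.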