🤖 AI Summary
In highly dynamic vehicular networks, millimeter-wave (mmWave) communication suffers from excessive beam training overhead, which consumes airtime that would otherwise be available for data transmission. To address this, the paper proposes a vision- and GPS-based multimodal perception fusion framework for beam direction prediction. Modality-specific encoders are coupled with a Transformer-based cross-modal fusion architecture to predict the optimal beam end to end, enabling proactive beam establishment and substantially shrinking the beam search space. Evaluated on a real-world V2V multimodal dataset, the method achieves a top-15 prediction accuracy of 77.58%, incurs only 2.32 dB average power loss, and reduces beam search overhead by 76.56% compared to the standards-defined exhaustive sweep. These results indicate significant improvements in link reliability and communication efficiency under high-mobility conditions.
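As a rough sanity check on the reported overhead figure (an inference, not stated explicitly in the summary): if the underlying codebook contains 64 candidate beams, a common configuration for real-world mmWave V2V datasets, then sweeping only the predicted top-15 beams instead of the full codebook yields exactly the quoted reduction.

```python
# Hypothetical check: overhead reduction when sweeping only the predicted
# top-k beams instead of an exhaustive N-beam codebook search.
# N = 64 is an assumed codebook size, not confirmed by this summary.
N, k = 64, 15
reduction = (N - k) / N
print(f"search overhead reduced by {reduction:.2%}")  # -> 76.56%
```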
📝 Abstract
Beamforming techniques are used in millimeter-wave (mmWave) communication to overcome the inherent path loss and thereby establish and maintain reliable connections. However, adopting the standards-defined beamforming approach in highly dynamic vehicular environments incurs high beam training overhead and reduces the airtime available for communication, mainly due to pilot signal exchange and exhaustive beam measurements. To address this, we present a multimodal sensing and fusion learning framework as an alternative that reduces such overhead. In this framework, we first extract features from the visual and GPS-coordinate sensing modalities individually using modality-specific encoders, and then fuse the multimodal features to predict the top-k beams, so that the best line-of-sight links can be proactively established. To show the generalizability of the proposed framework, we perform comprehensive experiments on four different vehicle-to-vehicle (V2V) scenarios from a real-world multimodal sensing and communication dataset. We observe that the proposed framework achieves up to 77.58% accuracy in predicting the top-15 beams, outperforms single-modality baselines, incurs an average power loss as low as 2.32 dB, and reduces the beam search space overhead by 76.56% for top-15 beams relative to the standards-defined approach.
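The sketch below illustrates the pipeline the abstract describes: per-modality encoders for images and GPS coordinates, a Transformer-based cross-modal fusion stage, and a top-k beam prediction head. It is a minimal illustration under assumed dimensions (64-beam codebook, 4-value GPS input for the two vehicles, simple CNN/MLP encoders), not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a vision + GPS multimodal
# beam predictor: modality-specific encoders, Transformer fusion, top-k head.
import torch
import torch.nn as nn

class BeamPredictor(nn.Module):
    def __init__(self, num_beams=64, d_model=256):  # num_beams=64 is an assumption
        super().__init__()
        # Vision encoder: small CNN producing one d_model feature vector per image
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # GPS encoder: MLP over (lat, lon) pairs of transmitter and receiver vehicles
        self.gps_encoder = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, d_model),
        )
        # Cross-modal fusion: Transformer encoder over the two modality tokens
        fusion_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(fusion_layer, num_layers=2)
        self.head = nn.Linear(d_model, num_beams)  # beam logits

    def forward(self, image, gps):
        tokens = torch.stack(
            [self.vision_encoder(image), self.gps_encoder(gps)], dim=1
        )                                    # (batch, 2 tokens, d_model)
        fused = self.fusion(tokens).mean(dim=1)
        return self.head(fused)              # (batch, num_beams)

# Usage: rank beams and keep the top-15 candidates for a reduced beam sweep
model = BeamPredictor()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 4))
top15_beams = logits.topk(k=15, dim=-1).indices
```

Restricting the subsequent measurement phase to these 15 candidates is what drives the reported reduction in beam search overhead.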