Perception-Enhanced Multitask Multimodal Semantic Communication for UAV-Assisted Integrated Sensing and Communication System

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of multi-task semantic transmission for hyperspectral and LiDAR multimodal data in UAV-assisted air-ground integrated sensing and communication (ISAC) systems—under constraints of limited channel bandwidth and dynamic environments—this paper proposes a perception-enhanced multimodal semantic communication architecture. The method jointly optimizes semantic encoding and transmission by integrating channel state information and downstream task requirements. Key contributions include: (1) a channel- and task-aware adaptive feature fusion mechanism; (2) a lightweight attention module embedded on-board to enable coarse-grained object classification-guided semantic compression; and (3) synergistic edge perception computing and robust semantic coding to balance reconstruction fidelity and classification accuracy. Experimental results demonstrate a 5–10% improvement in object classification accuracy, with negligible degradation in reconstruction quality and manageable computational overhead—significantly enhancing the efficiency and reliability of multimodal semantic transmission in complex operational scenarios.

📝 Abstract
Recent advances in integrated sensing and communication (ISAC) unmanned aerial vehicles (UAVs) have enabled their widespread deployment in critical applications such as emergency management. This paper investigates the challenge of efficient multitask multimodal data communication in UAV-assisted ISAC systems. In the considered system model, hyperspectral (HSI) and LiDAR data are collected by UAV-mounted sensors for both target classification and data reconstruction at the terrestrial base station (BS). Limited channel capacity and complex environmental conditions pose significant challenges to effective air-to-ground communication. To tackle this issue, we propose a perception-enhanced multitask multimodal semantic communication (PE-MMSC) system that strategically leverages the onboard computational and sensing capabilities of UAVs. In particular, we first propose a robust multimodal feature fusion method that adaptively combines HSI and LiDAR semantics while accounting for channel noise and task requirements. The method then introduces a perception-enhanced (PE) module that uses attention mechanisms to perform coarse classification on the UAV side, thereby guiding the attention-based multimodal fusion and transmission. Experimental results demonstrate that the proposed PE-MMSC system achieves 5%--10% higher target classification accuracy than conventional systems without the PE module, while maintaining comparable data reconstruction quality with acceptable computational overhead.
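The fusion idea described in the abstract can be illustrated with a minimal sketch: two modality feature vectors are combined with attention-style weights that depend on the UAV-side coarse classification confidence and the channel SNR. This is not the paper's actual architecture; the function `pe_fusion`, its scoring rule, and all parameter names are hypothetical stand-ins for the channel- and task-aware fusion the authors describe.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pe_fusion(hsi_feat, lidar_feat, coarse_logits, snr_db):
    """Hypothetical perception-enhanced fusion sketch.

    Weights the HSI and LiDAR feature vectors by scores derived from
    (a) the confidence of the onboard coarse classifier and
    (b) the current channel SNR, then returns their convex combination.
    The scoring rule here is illustrative, not the paper's.
    """
    conf = softmax(coarse_logits).max()      # coarse-classification confidence
    snr = 10.0 ** (snr_db / 10.0)            # linear channel SNR
    # Illustrative per-modality scores: confidence-scaled feature spread,
    # amplified when the channel is good.
    scores = np.array([hsi_feat.std(), lidar_feat.std()]) * conf * snr
    w = softmax(scores)                      # attention weights over modalities
    fused = w[0] * hsi_feat + w[1] * lidar_feat
    return fused, w

# Example: fuse two random 16-dim semantic vectors at 10 dB SNR.
rng = np.random.default_rng(0)
hsi = rng.normal(size=16)
lidar = rng.normal(size=16)
fused, weights = pe_fusion(hsi, lidar, coarse_logits=np.array([2.0, 0.5, 0.1]), snr_db=10.0)
```

A real implementation would learn these weights end-to-end (e.g. with a lightweight attention module trained jointly on the reconstruction and classification losses), rather than computing them from a fixed heuristic as above.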
Problem

Research questions and friction points this paper is trying to address.

Efficient multitask multimodal data communication in UAV-assisted ISAC systems
Limited channel capacity and complex environmental conditions for air-to-ground communication
Robust multimodal feature fusion for HSI and LiDAR data under noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Perception-enhanced multitask multimodal semantic communication
Robust multimodal feature fusion method
Attention mechanisms for coarse classification