🤖 AI Summary
This work addresses the inefficiency and poor robustness of video question answering (VideoQA) over noisy or fading wireless channels. We propose the first end-to-end semantic communication framework tailored for VideoQA, which abandons conventional pixel-level reconstruction in favor of direct transmission of task-relevant spatiotemporal semantics. Methodologically, we design a task-driven spatiotemporal semantic encoder and introduce, for the first time, bandwidth-adaptive deep joint source-channel coding (DJSCC) to construct a fully differentiable semantic communication architecture. Experiments demonstrate that our approach improves VideoQA accuracy by 5.17% under low signal-to-noise ratio (SNR) conditions, while consuming only approximately 0.5% of the bandwidth required by both conventional and state-of-the-art DJSCC methods. This yields substantial gains in bandwidth efficiency and channel robustness, establishing a novel paradigm for deploying semantic communication in video understanding tasks.
📝 Abstract
Although semantic communication (SC) has shown its potential in efficiently transmitting multimodal data such as texts, speeches and images, SC for videos has focused primarily on pixel-level reconstruction. However, these SC systems may be suboptimal for downstream intelligent tasks. Moreover, SC systems without pixel-level video reconstruction present advantages by achieving higher bandwidth efficiency and real-time performance of various intelligent tasks. The difficulty in such system design lies in the extraction of task-related compact semantic representations and their accurate delivery over noisy channels. In this paper, we propose an end-to-end SC system, named VideoQA-SC for video question answering (VideoQA) tasks. Our goal is to accomplish VideoQA tasks directly based on video semantics over noisy or fading wireless channels, bypassing the need for video reconstruction at the receiver. To this end, we develop a spatiotemporal semantic encoder for effective video semantic extraction, and a learning-based bandwidth-adaptive deep joint source-channel coding (DJSCC) scheme for efficient and robust video semantic transmission. Experiments demonstrate that VideoQA-SC outperforms traditional and advanced DJSCC-based SC systems that rely on video reconstruction at the receiver under a wide range of channel conditions and bandwidth constraints. In particular, when the signal-to-noise ratio is low, VideoQA-SC can improve the answer accuracy by 5.17% while saving almost 99.5% of the bandwidth at the same time, compared with the advanced DJSCC-based SC system. Our results show the great potential of SC system design for video applications.