Joint Transmission and Deblurring: A Semantic Communication Approach Using Events

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor transmission and reconstruction performance of motion-blurred images, caused by camera shake or fast-moving objects, under bandwidth-constrained conditions, this paper proposes the first semantic communication framework that jointly transmits blurred RGB images and event-camera data. Methodologically, the authors formulate cross-modal joint deblurring as an end-to-end differentiable joint source-channel coding (JSCC) task, design a shared/specific information-disentanglement transmission mechanism, and introduce a multi-stage collaborative training strategy. Technically, the framework integrates event-stream encoding, cross-modal feature disentanglement, and a deep deblurring decoder. Experiments show that, at identical bandwidth, the method achieves a PSNR gain of over 2.1 dB compared with state-of-the-art JSCC approaches, substantially improving motion-blur restoration quality. This work establishes a new paradigm for low-latency, high-fidelity visual semantic communication.

📝 Abstract
Deep learning-based joint source-channel coding (JSCC) is emerging as a promising technology for effective image transmission. However, most existing approaches focus on transmitting clear images, overlooking real-world challenges such as motion blur caused by camera shake or fast-moving objects. Motion blur often degrades image quality, making transmission and reconstruction more challenging. Event cameras, which asynchronously record pixel intensity changes with extremely low latency, have shown great potential for motion deblurring tasks. However, efficiently transmitting the abundant data generated by event cameras remains a significant challenge. In this work, we propose a novel JSCC framework for the joint transmission of blurry images and events, aimed at achieving high-quality reconstruction under limited channel bandwidth. The framework is designed as a deblurring-task-oriented JSCC system. Since RGB cameras and event cameras capture the same scene through different modalities, their outputs contain both shared and domain-specific information. To avoid transmitting the shared information twice, we extract it and transmit it once, alongside each modality's domain-specific information. At the receiver, the received signals are processed by a deblurring decoder to generate clear images. Additionally, we introduce a multi-stage training strategy to train the proposed model. Simulation results demonstrate that our method significantly outperforms existing JSCC-based image transmission schemes, effectively mitigating motion blur.
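The shared/specific disentanglement idea in the abstract can be illustrated with a toy numerical sketch. This is not the paper's learned model: the encoders are replaced by random feature vectors with a hand-set correlation, the disentanglement is a simple linear average-and-residual split, and the channel is AWGN at an arbitrary 10 dB SNR. All names and constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn(z, snr_db):
    """Pass a latent vector through an additive white Gaussian noise channel."""
    p_signal = np.mean(z ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return z + rng.normal(0.0, np.sqrt(p_noise), z.shape)

# Toy stand-ins for the learned RGB and event encoders' feature vectors.
# The two modalities observe the same scene, so their features are correlated.
rgb_feat = rng.normal(size=256)
event_feat = 0.7 * rgb_feat + 0.3 * rng.normal(size=256)

# Disentangle: one shared component plus two small modality-specific residuals.
shared = 0.5 * (rgb_feat + event_feat)
rgb_specific = rgb_feat - shared
event_specific = event_feat - shared

# Transmit each stream once over the noisy channel (10 dB SNR, chosen arbitrarily).
rx_shared = awgn(shared, 10)
rx_rgb_spec = awgn(rgb_specific, 10)
rx_event_spec = awgn(event_specific, 10)

# The receiver recombines the streams; in the paper, a learned deblurring
# decoder would consume these to reconstruct a sharp image.
rgb_hat = rx_shared + rx_rgb_spec
event_hat = rx_shared + rx_event_spec

# The residuals carry far less power than the raw features, which is why
# "transmit shared once + small residuals" can save channel bandwidth
# compared with sending both full feature vectors.
print(np.var(rgb_specific) / np.var(rgb_feat))
```

In the actual framework the split is learned end-to-end rather than computed by averaging, but the bandwidth argument is the same: the shared component is sent once, and only the low-power modality-specific parts are sent per modality.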
Problem

Research questions and friction points this paper is trying to address.

Image Blur
Bandwidth Limitation
Event Camera Data Transmission
Innovation

Methods, ideas, or system contributions that make the work stand out.

Joint Source-Channel Coding (JSCC)
Deep Learning
Event Camera and RGB Camera Fusion
Authors
Pujing Yang, College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
Guangyi Zhang, College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
Yunlong Cai, Zhejiang University (Communications, Signal Processing, Wireless Communications)
Lei Yu, School of Electronic Information, Wuhan University, Wuhan, China
Guanding Yu, Zhejiang University (Wireless Communications)