RED: Robust Event-Guided Motion Deblurring with Modality-Specific Disentangled Representation

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Event cameras inherently produce incomplete event streams due to their threshold-driven spiking mechanism, compromising motion priors and limiting the performance of event-guided image deblurring. To address this, we propose a modality-specific disentangled representation framework. Our method introduces a robustness-oriented event perturbation strategy and a disentangled OmniAttention mechanism to explicitly model intra- and inter-modal correlations between events and images. Key components include stochastic event masking, modality-specific feature disentanglement, enhanced cross-modal interaction, and motion-sensitive region attention optimization—collectively enabling adaptive response to incomplete event streams and semantic compensation. Extensive experiments on both synthetic and real-world benchmarks demonstrate state-of-the-art performance, with significant improvements in deblurred image sharpness, fine-detail recovery, and scene robustness.

📝 Abstract
Event cameras provide sparse yet temporally dense motion information, demonstrating great potential for motion deblurring. Existing methods focus on cross-modal interaction while overlooking the inherent incompleteness of event streams, which arises from the trade-off between sensitivity and noise introduced by the thresholding mechanism of Dynamic Vision Sensors (DVS). Such degradation compromises the integrity of motion priors and limits the effectiveness of event-guided deblurring. To tackle these challenges, we propose a Robust Event-guided Deblurring (RED) network with modality-specific disentangled representation. First, we introduce a Robustness-Oriented Perturbation Strategy (RPS) that applies random masking to events, exposing RED to incomplete event patterns and thereby fostering robustness against varied, unknown scenario conditions. Next, a disentangled OmniAttention is presented to explicitly model intra-motion, inter-motion, and cross-modality correlations from two inherently distinct but complementary sources: blurry images and partially disrupted events. Building on these reliable features, two interactive modules are designed to enhance motion-sensitive areas in blurry images and inject semantic context into incomplete event representations. Extensive experiments on synthetic and real-world datasets demonstrate that RED consistently achieves state-of-the-art performance in both accuracy and robustness.
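The RPS described above applies random masking to events during training so the network sees artificially incomplete streams. A minimal sketch of such patch-wise random masking on an event voxel grid is below; the voxel layout, patch size, and mask ratio are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def random_event_masking(event_voxel, mask_ratio=0.3, patch=8, rng=None):
    """Zero out randomly chosen spatial patches of an event voxel grid,
    simulating the missing events that RPS exposes the network to.

    event_voxel: (bins, H, W) array of accumulated event polarities.
    mask_ratio: fraction of patches to drop (hypothetical default).
    """
    if rng is None:
        rng = np.random.default_rng()
    bins, H, W = event_voxel.shape
    gh, gw = H // patch, W // patch
    # Decide which patches to drop; the same mask is shared across bins.
    drop = rng.random((gh, gw)) < mask_ratio
    mask = np.ones((H, W), dtype=event_voxel.dtype)
    for i in range(gh):
        for j in range(gw):
            if drop[i, j]:
                mask[i * patch:(i + 1) * patch,
                     j * patch:(j + 1) * patch] = 0.0
    return event_voxel * mask  # mask broadcasts over the temporal bins
```

In training, the perturbed voxels would replace the clean event input with some probability, so the deblurring network learns to compensate for absent motion cues rather than relying on complete event coverage.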
Problem

Research questions and friction points this paper is trying to address.

Addressing event stream incompleteness in motion deblurring
Enhancing robustness against degraded event data conditions
Improving cross-modality integration between blurry images and events
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robustness-Oriented Perturbation Strategy with random masking
Disentangled OmniAttention modeling cross-modality correlations
Interactive modules enhancing motion-sensitive areas and semantics
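The cross-modality correlation that OmniAttention models can be illustrated with a plain scaled dot-product cross-attention, where image tokens query event tokens. This is only a generic sketch: the paper's OmniAttention additionally disentangles intra-motion and inter-motion correlations, whose exact formulation is not given here, and the token shapes are assumptions.

```python
import numpy as np

def cross_modal_attention(img_feat, evt_feat):
    """Image tokens attend to event tokens via scaled dot-product attention.

    img_feat: (N, d) image-branch tokens used as queries.
    evt_feat: (M, d) event-branch tokens used as keys and values.
    Returns (N, d) event-conditioned features for the image branch.
    """
    d = img_feat.shape[-1]
    logits = img_feat @ evt_feat.T / np.sqrt(d)      # (N, M) similarities
    logits -= logits.max(axis=-1, keepdims=True)     # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)         # row-wise softmax
    return attn @ evt_feat                           # aggregate event cues
```

A symmetric call with the roles swapped (event tokens querying image tokens) would correspond to injecting semantic context from the blurry image into incomplete event representations, as the interactive modules above describe.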