Motion-Aware Adaptive Pixel Pruning for Efficient Local Motion Deblurring

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inefficient computational resource allocation and inadequate spatially-varying blur modeling in local motion deblurring, this paper proposes a motion-aware adaptive pixel pruning framework. Our method introduces: (1) a learnable blur-region prediction mask coupled with intra-frame motion analysis for precise blur localization; (2) structural reparameterization—replacing 3×3 convolutions with equivalent 1×1 convolutions—combined with pixel-wise dynamic pruning to enable demand-driven computation; and (3) end-to-end optimization via motion trajectory modeling and a multi-task loss comprising reconstruction, re-blurring, and mask prediction objectives. Evaluated on both local and global blur benchmarks, our approach surpasses state-of-the-art methods (e.g., LMD-ViT) in restoration accuracy while reducing FLOPs by 49%, achieving significant gains in efficiency and robustness.
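The multi-task objective mentioned above (reconstruction, re-blurring, and mask prediction terms) can be sketched as a weighted sum. The specific loss forms (L1, binary cross-entropy) and the weights below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def multitask_loss(restored, sharp, reblurred, blurred,
                   pred_mask, gt_mask, w_reblur=0.1, w_mask=0.05):
    """Toy combination of the three training objectives.

    Loss forms and weights are placeholders; the paper's exact
    formulations may differ.
    """
    # Reconstruction: L1 distance between restored and ground-truth sharp image.
    l_rec = np.abs(restored - sharp).mean()
    # Re-blurring: re-degrade the restored image and compare with the blurry input.
    l_reblur = np.abs(reblurred - blurred).mean()
    # Mask prediction: binary cross-entropy against the annotated blur mask.
    eps = 1e-7
    p = np.clip(pred_mask, eps, 1 - eps)
    l_mask = -(gt_mask * np.log(p) + (1 - gt_mask) * np.log(1 - p)).mean()
    return l_rec + w_reblur * l_reblur + w_mask * l_mask
```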

📝 Abstract
Local motion blur in digital images originates from the relative motion between dynamic objects and static imaging systems during exposure. Existing deblurring methods face significant challenges in addressing this problem due to their inefficient allocation of computational resources and inadequate handling of spatially varying blur patterns. To overcome these limitations, we first propose a trainable mask predictor that identifies blurred regions in the image. During training, we employ blur masks to exclude sharp regions. For inference optimization, we implement structural reparameterization by converting $3\times 3$ convolutions to computationally efficient $1\times 1$ convolutions, enabling pixel-level pruning of sharp areas to reduce computation. Second, we develop an intra-frame motion analyzer that translates relative pixel displacements into motion trajectories, establishing adaptive guidance for region-specific blur restoration. Our method is trained end-to-end using a combination of reconstruction loss, reblur loss, and mask loss guided by annotated blur masks. Extensive experiments demonstrate superior performance over state-of-the-art methods on both local and global blur datasets while reducing FLOPs by 49% compared to SOTA models (e.g., LMD-ViT). The source code is available at https://github.com/shangwei5/M2AENet.
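The reparameterization described in the abstract can be illustrated on a single-channel toy example: a 3×3 convolution at a pixel is a dot product over that pixel's unfolded 3×3 neighborhood, i.e. a 1×1 kernel over nine channels, so the computation can be gathered and executed only at pixels the mask marks as blurred. This is a minimal sketch under those assumptions, not the paper's implementation:

```python
import numpy as np

def conv3x3_dense(x, w):
    """Reference: full 3x3 convolution (single channel, zero padding)."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = (xp[i:i + 3, j:j + 3] * w).sum()
    return out

def conv3x3_pruned(x, w, mask):
    """Reparameterized form: unfold each kept pixel's 3x3 neighborhood
    into a 9-vector and apply a 1x1 (dot-product) kernel, skipping
    pixels the mask marks as sharp."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)      # sharp pixels are left untouched (pruned)
    wf = w.reshape(-1)          # 3x3 kernel flattened: 1x1 over 9 channels
    ii, jj = np.nonzero(mask)   # indices of blurred pixels only
    for i, j in zip(ii, jj):
        out[i, j] = xp[i:i + 3, j:j + 3].reshape(-1) @ wf
    return out
```

With a full mask the two forms produce identical outputs; with a sparse mask only the masked pixels are computed, which is where the FLOP savings come from.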
Problem

Research questions and friction points this paper is trying to address.

Inefficient allocation of computational resources in existing local motion deblurring methods
Inadequate handling of spatially varying blur patterns
High computational cost of uniformly processing images that are largely sharp
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trainable mask predictor identifies blurred regions
Structural reparameterization optimizes inference with 1×1 convolutions
Intra-frame motion analyzer guides adaptive blur restoration
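As a rough sketch of how relative pixel displacements might be turned into trajectory cues for adaptive restoration: given a per-pixel displacement field, a magnitude map can indicate blur extent and an angle map blur orientation. The `motion_trajectory` helper below is hypothetical; the analyzer's actual design is not detailed on this page:

```python
import numpy as np

def motion_trajectory(disp):
    """Convert a per-pixel displacement field of shape (H, W, 2) into
    trajectory magnitude and direction maps.

    Hypothetical stand-in for the paper's intra-frame motion analyzer.
    """
    dx, dy = disp[..., 0], disp[..., 1]
    magnitude = np.hypot(dx, dy)      # blur extent per pixel
    direction = np.arctan2(dy, dx)    # blur orientation per pixel (radians)
    return magnitude, direction
```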