AI Summary
This work addresses the joint problem of deblurring severely motion-blurred single images (those exhibiting large translational and rotational motion) and estimating the underlying camera trajectory. We propose an end-to-end framework that integrates model-driven and data-driven principles. Our key innovations include: (i) a differentiable projection-based motion blur model that explicitly incorporates 3D camera rotation trajectories into the blur formation process; (ii) a modular neural network that jointly predicts trajectory parameters and the latent sharp image; and (iii) a re-blurring loss that enables closed-loop optimization. The resulting method offers strong physical interpretability and generalization. It sets new state-of-the-art performance on both synthetic and real-world benchmarks, significantly outperforming existing approaches, especially under spatially varying and heavy blur, while simultaneously delivering high-fidelity restored images and accurate 3D camera trajectories.
Abstract
Motion blur caused by camera shake, particularly under large or rotational movements, remains a major challenge in image restoration. We propose a deep learning framework that jointly estimates the latent sharp image and the underlying camera motion trajectory from a single blurry image. Our method leverages the Projective Motion Blur Model (PMBM), implemented efficiently using a differentiable blur creation module compatible with modern networks. A neural network predicts a full 3D rotation trajectory, which guides a model-based restoration network trained end-to-end. This modular architecture provides interpretability by revealing the camera motion that produced the blur. Moreover, this trajectory enables the reconstruction of the sequence of sharp images that generated the observed blurry image. To further refine results, we optimize the trajectory post-inference via a reblur loss, improving consistency between the blurry input and the restored output. Extensive experiments show that our method achieves state-of-the-art performance on both synthetic and real datasets, particularly in cases with severe or spatially variant blur, where end-to-end deblurring networks struggle.
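To make the blur model and reblur loss concrete, here is a minimal NumPy sketch of a PMBM-style formation process: the blurry image is modeled as the average of the sharp image warped by the homographies K R_t K^{-1} induced by the 3D rotation trajectory, and the reblur loss measures the discrepancy between the observed blurry input and the re-blurred restored image. All function names (`warp_homography`, `reblur`, `reblur_loss`) are illustrative, not the repository's API, and nearest-neighbor sampling stands in for the differentiable interpolation a trainable module would use.

```python
import numpy as np

def warp_homography(img, H):
    """Inverse-warp a grayscale image by homography H (nearest-neighbor)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ pts          # map each output pixel back to its source
    src /= src[2]                         # normalize homogeneous coordinates
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w)
    out[valid] = img[sy[valid], sx[valid]]
    return out.reshape(h, w)

def reblur(sharp, K, rotations):
    """PMBM-style blur: average the sharp image warped by K @ R_t @ K^-1."""
    Kinv = np.linalg.inv(K)
    warped = [warp_homography(sharp, K @ R @ Kinv) for R in rotations]
    return np.mean(warped, axis=0)

def reblur_loss(blurry, restored, K, rotations):
    """L1 discrepancy between the observed blur and the re-blurred estimate."""
    return np.abs(reblur(restored, K, rotations) - blurry).mean()
```

With an identity trajectory (a single identity rotation), `reblur` returns the sharp image unchanged and the loss is zero; rotating the trajectory away from identity smears the image along the induced pixel motion, which is what the post-inference trajectory optimization exploits.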
Code and trained models are available at https://github.com/GuillermoCarbajal/Blur2Seq/