🤖 AI Summary
Existing motion restoration benchmarks inadequately cover complex dynamic scenes, such as strong ego-motion, multi-agent interactions, and depth-dependent blur, which limits both algorithm evaluation and generalization. To address this, we introduce two high-frame-rate (1000 FPS), multi-task image restoration benchmarks featuring the first explicit, controllable modeling of motion magnitude. We propose a flow-guided adaptive blur synthesis method that integrates depth-aware modeling with frame averaging to generate high-fidelity, scalable ground-truth sequences. Furthermore, we establish the first standardized evaluation benchmark spanning motion intensities from minimal to extreme. This benchmark enables joint assessment of video frame interpolation, optical flow estimation, and deblurring, significantly improving the comparability and interpretability of algorithm performance under large-motion blur and complex dynamics, and provides a high-precision, highly controllable testbed for next-generation video restoration models.
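To make the flow-guided adaptive averaging concrete, here is a minimal sketch: it accumulates optical-flow magnitude across consecutive 1000 FPS frames and grows the averaging window until a target motion level is reached. The Farneback flow, threshold values, and function names (`mean_flow_magnitude`, `synthesize_blur`) are illustrative assumptions, not the paper's exact pipeline, which additionally incorporates depth-aware modeling.

```python
import cv2
import numpy as np

def mean_flow_magnitude(prev_gray, next_gray):
    """Average optical-flow magnitude (pixels) between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return float(np.linalg.norm(flow, axis=2).mean())

def synthesize_blur(frames, target_motion_px=8.0, max_window=65):
    """Average consecutive high-FPS frames until the accumulated flow
    magnitude reaches target_motion_px; return the blurred frame and
    the sharp center frame as ground truth."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    accumulated = 0.0
    end = 1  # exclusive end index of the averaging window
    while end < min(len(frames), max_window):
        accumulated += mean_flow_magnitude(grays[end - 1], grays[end])
        end += 1
        if accumulated >= target_motion_px:
            break
    window = frames[:end]
    blurred = np.mean(np.stack(window).astype(np.float32), axis=0)
    sharp_gt = window[len(window) // 2]  # center frame stays sharp
    return blurred.astype(np.uint8), sharp_gt
```

Adapting the window to measured flow, rather than averaging a fixed number of frames, is what keeps the synthesized blur consistent across scenes with very different motion speeds.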
📝 Abstract
We introduce MIORe and VAR-MIORe, two novel multi-task datasets that address critical limitations in current motion restoration benchmarks. Designed with high-frame-rate (1000 FPS) acquisition and professional-grade optics, our datasets capture a broad spectrum of motion scenarios, including complex ego-camera movements, dynamic multi-subject interactions, and depth-dependent blur effects. By adaptively averaging frames based on computed optical-flow metrics, MIORe generates consistent motion blur while preserving sharp inputs for video frame interpolation and optical flow estimation. VAR-MIORe extends this design by spanning a variable range of motion magnitudes, from minimal to extreme, establishing the first benchmark to offer explicit control over motion amplitude. We provide high-resolution, scalable ground truths that challenge existing algorithms under both controlled and adverse conditions, paving the way for next-generation research on a variety of image and video restoration tasks.
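VAR-MIORe's explicit control over motion amplitude can be pictured as sweeping the target motion magnitude used during blur synthesis. The sketch below reuses the hypothetical `synthesize_blur` helper from above; the level values are illustrative placeholders, not the dataset's actual settings, and `frames` is assumed to be a list of frames from one 1000 FPS clip.

```python
# Hypothetical sweep over motion amplitudes (pixels of accumulated flow),
# producing graded blur variants of the same clip, from minimal to extreme.
motion_levels_px = [2.0, 8.0, 32.0, 128.0]  # assumed example levels
variants = {
    level: synthesize_blur(frames, target_motion_px=level)
    for level in motion_levels_px
}
```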