In-2-4D: Inbetweening from Two Single-View Images to 4D Generation

📅 2025-04-11
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper introduces a novel task, bilateral image-to-4D motion interpolation: generating a geometrically consistent and physically plausible full 4D (3D + temporal) motion sequence from only two monocular images depicting the start and end poses of an object. Methodologically, we propose a hierarchical keyframe-guided piecewise dynamic Gaussian splatting framework, integrating a learnable non-rigid deformation field and boundary interpolation constraints. To ensure temporal coherence, we design a cross-timestep multi-view diffusion self-attention mechanism and impose rigid transformation regularization. Experiments demonstrate that our approach significantly outperforms existing baselines in qualitative evaluation, quantitative metrics (e.g., Chamfer distance, motion smoothness), and user studies. It robustly reconstructs large-displacement and highly non-rigid motions, producing flicker-free, geometrically coherent, and temporally smooth 4D content.
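
A core ingredient in the summary above is the learnable non-rigid deformation field that turns a static keyframe reconstruction into dynamic Gaussians. The paper's exact architecture is not given on this page, so the following is a minimal PyTorch sketch under assumptions: a small MLP over Fourier-encoded center position and timestep that predicts per-Gaussian offsets to position, rotation (quaternion), and scale. `DeformationField`, `hidden`, and `pe_bands` are illustrative names and hyperparameters, not the authors' values.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Maps a canonical Gaussian center and a timestep to per-Gaussian
    offsets (position, rotation quaternion, scale). Hypothetical sketch."""

    def __init__(self, hidden: int = 256, pe_bands: int = 6):
        super().__init__()
        self.pe_bands = pe_bands
        in_dim = 4 * (2 * pe_bands + 1)  # Fourier features of (x, y, z, t)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4 + 3),  # (dx, dq, ds)
        )

    def encode(self, v: torch.Tensor) -> torch.Tensor:
        # v: (N, 4) -> (N, 4 * (2 * pe_bands + 1)) Fourier features
        feats = [v]
        for k in range(self.pe_bands):
            feats += [torch.sin((2.0 ** k) * v), torch.cos((2.0 ** k) * v)]
        return torch.cat(feats, dim=-1)

    def forward(self, xyz: torch.Tensor, t: torch.Tensor):
        # xyz: (N, 3) canonical centers; t: (N, 1) normalized timestep in [0, 1]
        h = self.encode(torch.cat([xyz, t], dim=-1))
        dx, dq, ds = self.mlp(h).split([3, 4, 3], dim=-1)
        return dx, dq, ds  # applied as x' = x + dx, q' = normalize(q + dq), s' = s + ds
```

The rigid transformation regularization mentioned in the summary would then penalize deviations of the predicted offsets within local neighborhoods from a shared rigid motion; that loss term is omitted from this sketch.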

📝 Abstract
We propose a new problem, In-2-4D, for generative 4D (i.e., 3D + motion) inbetweening from a minimalistic input setting: two single-view images capturing an object in two distinct motion states. Given two images representing the start and end states of an object in motion, our goal is to generate and reconstruct the motion in 4D. We utilize a video interpolation model to predict the motion, but large frame-to-frame motions can lead to ambiguous interpretations. To overcome this, we employ a hierarchical approach to identify keyframes that are visually close to the input states and show significant motion, then generate smooth fragments between them. For each fragment, we construct the 3D representation of the keyframe using Gaussian Splatting. The temporal frames within the fragment guide the motion, enabling their transformation into dynamic Gaussians through a deformation field. To improve temporal consistency and refine 3D motion, we expand the self-attention of multi-view diffusion across timesteps and apply rigid transformation regularization. Finally, we merge the independently generated 3D motion segments by interpolating boundary deformation fields and optimizing them to align with the guiding video, ensuring smooth and flicker-free transitions. Through extensive qualitative and quantitative experiments as well as a user study, we show the effectiveness of our method and its components. The project page is available at https://in-2-4d.github.io/
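
The merging step at the end of the abstract, interpolating boundary deformation fields between independently optimized fragments, can be pictured as a blend over a short overlap window around each fragment boundary. The sketch below shows only that blend; the abstract's subsequent optimization against the guiding video is omitted, and the window bounds `t0`/`t1` are hypothetical parameters.

```python
import torch

def blend_boundary_offsets(dx_a: torch.Tensor, dx_b: torch.Tensor,
                           t: float, t0: float, t1: float) -> torch.Tensor:
    """Blend per-Gaussian position offsets from two adjacent fragments.

    dx_a, dx_b: (N, 3) offsets predicted by fragment A's and fragment B's
    deformation fields at the same timestep t inside the overlap [t0, t1].
    Returns a continuous offset: pure A at t0, pure B at t1.
    """
    w = min(max((t - t0) / (t1 - t0), 0.0), 1.0)  # linear ramp across the window
    return (1.0 - w) * dx_a + w * dx_b
```
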
Problem

Research questions and friction points this paper is trying to address.

Generating 4D motion from two single-view images
Overcoming ambiguity in large frame-to-frame motions
Ensuring smooth transitions via hierarchical keyframe interpolation (see the sketch after this list)
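
One plausible reading of the hierarchical keyframe idea referenced in the last item is recursive bisection: while the apparent motion between two neighboring states is too large for a video interpolator to resolve unambiguously, synthesize a midpoint keyframe and recurse on both halves. The sketch below is an assumption, not the paper's exact algorithm; `interpolate_mid`, `motion_mag`, and `thresh` are hypothetical placeholders for the interpolation model, a motion-magnitude metric (e.g., mean optical-flow norm), and a tuned threshold.

```python
def select_keyframes(img_a, img_b, interpolate_mid, motion_mag,
                     thresh, depth=0, max_depth=4):
    """Recursively bisect [img_a, img_b] with synthesized midpoint keyframes
    until neighboring states are close enough to interpolate unambiguously.
    Returns the ordered keyframe list, endpoints included."""
    if depth >= max_depth or motion_mag(img_a, img_b) <= thresh:
        return [img_a, img_b]
    mid = interpolate_mid(img_a, img_b)  # middle frame from a video interpolation model
    left = select_keyframes(img_a, mid, interpolate_mid, motion_mag,
                            thresh, depth + 1, max_depth)
    right = select_keyframes(mid, img_b, interpolate_mid, motion_mag,
                             thresh, depth + 1, max_depth)
    return left[:-1] + right  # drop the midpoint duplicated across halves
```
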
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical keyframe identification for motion generation
3D reconstruction using Gaussian Splatting technique
Temporal consistency via cross-timestep self-attention (see the sketch after this list)
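
The cross-timestep self-attention in the last item can be read as follows: queries stay per-frame, while keys and values are gathered from the tokens of every timestep, so multi-view diffusion features are shared across time. The PyTorch sketch below shows the attention pattern only; the learned query/key/value and output projections and the surrounding diffusion network are omitted, and all shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def cross_timestep_self_attention(x: torch.Tensor, num_heads: int) -> torch.Tensor:
    """Each frame's tokens attend to tokens from all timesteps.

    x: (T, N, C) -- T timesteps, N tokens per frame, C channels.
    Returns features of the same shape, temporally coupled via attention.
    """
    T, N, C = x.shape
    d = C // num_heads
    q = x.reshape(T, N, num_heads, d).transpose(1, 2)           # (T, H, N, d) per-frame queries
    kv = x.reshape(1, T * N, C).expand(T, -1, -1)               # every frame sees all timesteps
    k = kv.reshape(T, T * N, num_heads, d).transpose(1, 2)      # (T, H, T*N, d) shared keys/values
    out = F.scaled_dot_product_attention(q, k, k)               # (T, H, N, d)
    return out.transpose(1, 2).reshape(T, N, C)
```
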
Sauradip Nag
CVSSP, University of Surrey
Computer Vision · Computer Graphics · Deep Learning
D. Cohen-Or
Tel Aviv University, Israel
Hao Zhang
Simon Fraser University, Canada
Ali Mahdavi-Amiri
Simon Fraser University, Canada