ShapeGen4D: Towards High Quality 4D Shape Generation from Videos

📅 2025-10-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of reconstructing temporally coherent, view-consistent dynamic geometry and appearance directly from monocular video. The proposed end-to-end method generates 4D (3D + time) shapes without per-frame optimization. Methodologically, it introduces a temporal attention mechanism to model non-rigid motion, employs temporal-aware point sampling and 4D latent anchoring to capture structural evolution, and enforces temporal consistency via cross-frame noise sharing. It further integrates large-scale pretrained 3D priors, video-conditioned implicit neural representations, and joint spatiotemporal optimization. Experiments on real-world videos demonstrate substantial improvements in generation robustness and visual fidelity: the approach effectively suppresses topological artifacts and flickering while achieving high temporal coherence and geometric consistency—marking the first demonstration of high-quality, post-processing-free 4D shape synthesis.

Technology Category

Application Category

📝 Abstract
Video-conditioned 4D shape generation aims to recover time-varying 3D geometry and view-consistent appearance directly from an input video. In this work, we introduce a native video-to-4D shape generation framework that synthesizes a single dynamic 3D representation end-to-end from the video. Our framework introduces three key components based on large-scale pre-trained 3D models: (i) a temporal attention that conditions generation on all frames while producing a time-indexed dynamic representation; (ii) a time-aware point sampling and 4D latent anchoring that promote temporally consistent geometry and texture; and (iii) noise sharing across frames to enhance temporal stability. Our method accurately captures non-rigid motion, volume changes, and even topological transitions without per-frame optimization. Across diverse in-the-wild videos, our method improves robustness and perceptual fidelity and reduces failure modes compared with the baselines.
Problem

Research questions and friction points this paper is trying to address.

Generating 4D shapes from video input
Creating dynamic 3D geometry and appearance
Ensuring temporal consistency in 4D generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporal attention for dynamic representation generation
Time-aware sampling for consistent geometry and texture
Noise sharing across frames enhances temporal stability
🔎 Similar Papers
No similar papers found.