AI Summary
This work addresses the seamless integration of 4D human animation with reconstructed 3D scenes, tackling key challenges including inaccurate human geometric placement, interpenetration (mesh clipping), inconsistent illumination and appearance, and poor controllability of camera trajectories. We propose a three-stage framework: (1) an automatic human placement module leveraging geometric constraints and collision detection to ensure precise scale alignment and penetration avoidance; (2) a training-free rendering-domain style transfer method that harmonizes illumination, reflectance, and material appearance between the human and the scene; and (3) a jointly optimized 4D-3D view synthesis mechanism enabling spatiotemporally coherent video generation under user-specified camera trajectories. Evaluated on diverse multi-scene, multi-motion benchmarks, our approach achieves superior geometric fidelity, visual naturalness, and immersive quality compared to existing single-stage fusion methods.
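The summary describes the placement stage only at a high level (geometric constraints plus collision detection). As a hypothetical illustration of that idea, not the paper's actual module, one could align the human's lowest vertex with the scene floor and reject placements whose axis-aligned bounding box contains scene points; all function names and the y-up convention here are assumptions:

```python
import numpy as np

def place_human(human_verts, floor_height, scene_points, clearance=0.02):
    """Translate a human mesh so its feet rest on the scene floor, then
    flag placements that penetrate scene geometry.
    Illustrative sketch only; the paper's placement module is not public."""
    # Align the lowest human vertex with the floor plane (y-up assumed).
    offset = floor_height - human_verts[:, 1].min()
    placed = human_verts + np.array([0.0, offset, 0.0])

    # Crude collision test: any scene point inside the human's AABB,
    # expanded by a small clearance, counts as interpenetration.
    lo = placed.min(axis=0) - clearance
    hi = placed.max(axis=0) + clearance
    inside = np.all((scene_points > lo) & (scene_points < hi), axis=1)
    return placed, bool(inside.any())
```

A real system would run this per animation frame and use a proper mesh-level test (e.g. signed distances) rather than a bounding box, but the box check conveys the constraint being enforced.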
Abstract
3D scene reconstruction and 4D human animation have seen rapid progress and broad adoption in recent years. However, seamlessly integrating reconstructed scenes with 4D human animation to produce visually engaging results remains challenging. One key difficulty lies in placing the human at the correct location and scale within the scene while avoiding unrealistic interpenetration. Another challenge is that the human and the background may exhibit different lighting and style, leading to unrealistic composites. In addition, appealing character motion videos are often accompanied by camera movements, which requires reconstructing viewpoints along a specified trajectory. We present AnimateScene, which addresses these issues in a unified framework. First, we design an accurate placement module that automatically determines a plausible 3D position for the human and prevents any interpenetration with the scene during motion. Second, we propose a training-free style alignment method that adapts the 4D human representation to match the background's lighting and style, achieving coherent visual integration. Finally, we design a joint post-reconstruction method for both the 4D human and the 3D scene that allows camera trajectories to be inserted, enabling the final rendered video to feature visually appealing camera movements. Extensive experiments show that AnimateScene generates dynamic scene videos with high geometric detail and spatiotemporal coherence across various camera and action combinations.
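The abstract's training-free style alignment is not specified in detail. A minimal sketch of the general idea, assuming a simple Reinhard-style per-channel mean/std transfer as a stand-in for the paper's method (which may operate on a richer rendering-domain representation), could look like:

```python
import numpy as np

def match_color_stats(human_rgb, scene_rgb):
    """Shift the human render's per-channel mean/std toward the scene's.
    A simple training-free proxy, not the paper's actual alignment method."""
    h = human_rgb.reshape(-1, 3).astype(np.float64)
    s = scene_rgb.reshape(-1, 3).astype(np.float64)
    h_mu, h_sd = h.mean(axis=0), h.std(axis=0) + 1e-8  # eps avoids /0
    s_mu, s_sd = s.mean(axis=0), s.std(axis=0)
    # Normalize human statistics, then re-express them in scene statistics.
    out = (human_rgb.astype(np.float64) - h_mu) / h_sd * s_sd + s_mu
    return np.clip(out, 0.0, 255.0)
```

Because the transform is a closed-form statistic match rather than a learned model, it is training-free in the same sense the abstract claims for the proposed method, though the actual approach harmonizes lighting and style beyond global color statistics.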