🤖 AI Summary
Existing 3D stylization methods are largely confined to static scenes and transfer only appearance (e.g., color and texture), neglecting the geometric structure of style images, which leads to geometric distortion and temporal incoherence in dynamic settings. This work introduces the first joint appearance-and-geometry stylization framework for dynamic Neural Radiance Fields (NeRFs). We explicitly encode geometric priors from style images via depth maps and integrate them into dynamic radiance field optimization. A multi-stage geometric constraint mechanism enables synergistic geometry-appearance transfer while ensuring motion consistency across frames. Evaluated on both synthetic and real-world dynamic datasets, our method achieves significant improvements in style fidelity, geometric detail reconstruction accuracy, and temporal coherence, surpassing state-of-the-art approaches in visual quality.
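The joint objective described above, a geometric term driven by depth maps from the style image combined with an appearance term, can be illustrated with a minimal sketch. Note this is a conceptual toy, not the paper's actual losses: the function names, the plain L2 form of each term, and the weights `w_geom`/`w_app` are all illustrative assumptions.

```python
import numpy as np

def geometry_loss(rendered_depth, style_depth):
    # Illustrative geometric term: L2 distance between the depth rendered
    # from the radiance field and a depth target derived from the style image.
    return np.mean((rendered_depth - style_depth) ** 2)

def appearance_loss(rendered_rgb, stylized_rgb):
    # Illustrative appearance term: L2 distance to a stylized color target.
    return np.mean((rendered_rgb - stylized_rgb) ** 2)

def joint_stylization_loss(rendered_rgb, rendered_depth,
                           stylized_rgb, style_depth,
                           w_geom=1.0, w_app=1.0):
    # Hypothetical weighted sum of the two terms; the paper instead applies
    # geometry transfer first and appearance transfer afterward.
    return (w_geom * geometry_loss(rendered_depth, style_depth)
            + w_app * appearance_loss(rendered_rgb, stylized_rgb))
```

In an actual dynamic-NeRF pipeline these terms would be computed per frame on rendered outputs and backpropagated through the radiance field, with additional constraints enforcing temporal coherence across frames.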
📝 Abstract
Current 3D stylization techniques primarily focus on static scenes, while our world is inherently dynamic, filled with moving objects and changing environments. Existing style transfer methods mainly target appearance, such as color and texture transformation, but often neglect the geometric characteristics of the style image, which are crucial for achieving a complete and coherent stylization effect. To overcome these shortcomings, we propose GAS-NeRF, a novel approach for joint appearance and geometry stylization in dynamic radiance fields. Our method leverages depth maps to extract and transfer geometric details into the radiance field, followed by appearance transfer. Experimental results on synthetic and real-world datasets demonstrate that our approach significantly enhances stylization quality while maintaining temporal coherence in dynamic scenes.