🤖 AI Summary
This paper addresses core challenges in 4D dynamic scene modeling—namely, inaccurate motion representation, temporal inconsistency, and physically implausible deformations. To this end, it systematically surveys over 200 radiance field methods and introduces, for the first time, a unified representation framework spanning from implicit neural fields (e.g., NeRF) to explicit Gaussian fields (e.g., 3D Gaussian Splatting). The authors propose a multidimensional taxonomy structured along four axes: motion modeling paradigms, auxiliary information integration, temporal consistency constraints, and physical regularization. They further establish novel evaluation criteria—interpretability and temporal stability—to complement traditional metrics. By unifying differentiable volumetric rendering with dynamic regularization techniques, the framework identifies key pathways toward real-time reconstruction, lightweight training, and cross-scene generalization. This work provides the first authoritative classification and roadmap for dynamic radiance field research, establishing foundational principles for future advances.
📝 Abstract
Dynamic scene representation and reconstruction have undergone transformative advances in recent years, catalyzed by breakthroughs in neural radiance fields and 3D Gaussian splatting techniques. While initially developed for static environments, these methodologies have rapidly evolved to address the complexities inherent in 4D dynamic scenes through an expansive body of research. Coupled with innovations in differentiable volumetric rendering, these approaches have significantly enhanced the quality of motion representation and dynamic scene reconstruction, thereby garnering substantial attention from the computer vision and graphics communities. This survey presents a systematic analysis of over 200 papers focused on dynamic scene representation using radiance fields, spanning the spectrum from implicit neural representations to explicit Gaussian primitives. We categorize and evaluate these works through multiple critical lenses: motion representation paradigms, reconstruction techniques for varied scene dynamics, auxiliary information integration strategies, and regularization approaches that ensure temporal consistency and physical plausibility. We organize diverse methodological approaches under a unified representational framework, concluding with a critical examination of persistent challenges and promising research directions. By providing this comprehensive overview, we aim to establish a definitive reference for researchers entering this rapidly evolving field while offering experienced practitioners a systematic understanding of both conceptual principles and practical frontiers in dynamic scene reconstruction.
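Both families of methods surveyed here rest on the same differentiable volume-rendering quadrature: a ray's color is a transmittance-weighted sum of per-sample colors. A minimal sketch of this standard compositing rule (the function name and the use of NumPy are illustrative choices, not code from any surveyed method):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """NeRF-style discrete volume rendering along one ray.

    sigmas: (N,) per-sample densities
    colors: (N, 3) per-sample RGB values
    deltas: (N,) distances between adjacent samples
    Returns the composited ray color and the per-sample weights.
    """
    # Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j): light surviving to sample i
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])
    # Weight of each sample in the final color
    weights = trans * alphas
    ray_color = (weights[:, None] * colors).sum(axis=0)
    return ray_color, weights
```

Because every operation is differentiable, gradients of a photometric loss on `ray_color` flow back to the per-sample densities and colors, whether those come from an MLP (NeRF) or from rasterized Gaussian primitives (3DGS); dynamic methods additionally condition the samples on time or on a learned deformation.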