🤖 AI Summary
Existing 2D unsupervised object-centric representation learning methods struggle to capture the intrinsic 3D geometry and motion of dynamic scenes. To address this, we propose DynaVol-S, the first framework to jointly optimize object-centric voxelization and canonical-space deformation, driven by differentiable volume rendering of compositional Neural Radiance Fields (NeRFs). This enables disentangled learning of geometry, semantics, and motion within a 3D voxel space. Crucially, DynaVol-S supports native 3D operations, including geometric editing and trajectory manipulation, that lie beyond the reach of 2D approaches. Quantitatively, it achieves state-of-the-art performance on novel-view synthesis and unsupervised object decomposition, and it generalizes robustly to real-world dynamic scenes with complex object interactions.
📝 Abstract
Learning object-centric representations from videos without supervision is challenging. Unlike most previous approaches, which focus on decomposing 2D images, we present DynaVol-S, a 3D generative model for dynamic scenes that enables object-centric learning within a differentiable volume rendering framework. The key idea is object-centric voxelization, which captures the 3D nature of the scene by inferring per-object occupancy probabilities at individual spatial locations. These voxel features evolve through a canonical-space deformation function and are optimized in an inverse rendering pipeline with a compositional NeRF. Additionally, our approach integrates 2D semantic features into 3D semantic grids, representing the scene through multiple disentangled voxel grids. DynaVol-S significantly outperforms existing models on both novel-view synthesis and unsupervised decomposition of dynamic scenes. By jointly modeling geometric structure and semantic features, it handles challenging real-world scenarios involving complex object interactions. Furthermore, once trained, the explicitly meaningful voxel features enable capabilities that 2D scene decomposition methods cannot offer, such as novel scene generation by editing geometric shapes or manipulating object motion trajectories.
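The compositional rendering idea described above, where per-object densities yield occupancy probabilities that mix object colors inside a single ray integral, can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions, not the paper's implementation: the function name `composite_render` and its interface are hypothetical, and the deformation network, semantic grids, and optimization loop are omitted.

```python
import numpy as np

def composite_render(sigmas, colors, deltas):
    """Volume-render one ray from K per-object density/color samples.

    sigmas: (S, K) non-negative per-object densities at S ray samples
    colors: (S, K, 3) per-object RGB at each sample
    deltas: (S,) distances between consecutive samples
    Returns (rgb, obj_weights); obj_weights gives each object's total
    contribution to the ray, usable as a soft object decomposition.
    """
    eps = 1e-10
    sigma_total = sigmas.sum(axis=1)                        # (S,) summed density
    occupancy = sigmas / (sigma_total[:, None] + eps)       # (S, K) per-object occupancy
    mixed_color = (occupancy[..., None] * colors).sum(1)    # (S, 3) occupancy-weighted color

    # Standard NeRF-style alpha compositing along the ray.
    alpha = 1.0 - np.exp(-sigma_total * deltas)             # (S,) opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + eps]))  # (S,) transmittance
    weights = trans * alpha                                 # (S,) sample weights

    rgb = (weights[:, None] * mixed_color).sum(0)           # (3,) rendered color
    obj_weights = (weights[:, None] * occupancy).sum(0)     # (K,) per-object contribution
    return rgb, obj_weights
```

In the full model these densities would come from the deformed voxel grids and the whole pipeline would be differentiated end-to-end; here the sketch only shows how per-object occupancies both blend colors for rendering and produce an unsupervised object assignment.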