🤖 AI Summary
This work addresses the challenge of interpolating temporally coherent intermediate shapes in non-rigid point cloud sequences, including sequences whose topology changes. We propose an unsupervised neural implicit deformation modeling framework. Our method formulates deformation as a continuous velocity field in Euclidean space, integrates a modified level-set equation for robust surface reconstruction, and incorporates unsupervised physical regularizers (including invertibility and smoothness) alongside geometric constraints to handle topology changes, noise, and partial observations. Crucially, it requires no supervision from intermediate frames and makes no assumptions about structured input or isometric deformation. Experiments demonstrate significant improvements over state-of-the-art methods across multiple degradation scenarios. Notably, our approach achieves, for the first time, super-resolution reconstruction and high-fidelity dynamic mesh generation from low-resolution 4D Kinect sequences.
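The core mechanism can be illustrated numerically: given a continuous velocity field v(x, t), an intermediate shape at time t is obtained by integrating every point forward along the field. The sketch below substitutes a hand-written rotational field for the learned network; `velocity`, `interpolate_shape`, and all constants are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def velocity(points, t):
    """Toy stand-in for a learned continuous velocity field v(x, t).

    Here: a time-independent rigid rotation about the origin in 2D;
    the actual method uses a neural network over Euclidean space.
    """
    omega = np.pi / 2  # angular speed (rad per unit time), illustrative
    return np.stack([-omega * points[:, 1], omega * points[:, 0]], axis=1)

def interpolate_shape(points, t_target, n_steps=100):
    """Forward-Euler integration of points along the velocity field,
    yielding an intermediate shape at time t_target in [0, 1]."""
    p = points.astype(float).copy()
    dt = t_target / n_steps
    for k in range(n_steps):
        p = p + dt * velocity(p, k * dt)
    return p

src = np.array([[1.0, 0.0], [0.0, 1.0]])  # two sample points
mid = interpolate_shape(src, 1.0)         # rotated by ~90 degrees
```

Because the field acts on arbitrary points in space rather than on mesh vertices, the same integration applies directly to unstructured point clouds.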
📝 Abstract
Generating realistic intermediate shapes between non-rigidly deformed shapes is a challenging task in computer vision, especially with unstructured data (e.g., point clouds) where temporal consistency across frames is lacking and topologies change. Most interpolation methods are designed for structured data (i.e., meshes) and do not apply to real-world point clouds. In contrast, our approach, 4Deform, leverages neural implicit representation (NIR) to enable shape deformation with free topology changes. Unlike previous mesh-based methods that learn vertex-based deformation fields, our method learns a continuous velocity field in Euclidean space, making it suitable for less structured data such as point clouds. Additionally, our method does not require intermediate-shape supervision during training; instead, we incorporate physical and geometric constraints to regularize the velocity field. We reconstruct intermediate surfaces using a modified level-set equation, directly linking our NIR with the velocity field. Experiments show that our method significantly outperforms previous NIR approaches across various scenarios (e.g., noisy, partial, topology-changing, non-isometric shapes) and, for the first time, enables new applications such as 4D Kinect sequence upsampling and real-world high-resolution mesh deformation.
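The link between the implicit representation and the velocity field rests on the level-set (advection) equation, ∂f/∂t + v · ∇f = 0: the implicit function f is transported by v, and the zero level set traces out the intermediate surfaces. A minimal grid-based sketch of this mechanism, assuming a constant drift velocity and a first-order upwind scheme (the grid size, velocity, and step sizes are illustrative choices, not the paper's):

```python
import numpy as np

# Sample f(x, y) = signed distance to a circle of radius 0.5 on a grid.
n = 64
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
h = xs[1] - xs[0]
f = np.sqrt(X**2 + Y**2) - 0.5

vx = 0.3          # constant velocity in +x (toy stand-in for v(x, t))
dt = 0.5 * h      # CFL-stable time step for this velocity
for _ in range(40):
    # First-order upwind difference in x (valid since vx > 0):
    # f <- f - dt * vx * df/dx  discretizes  df/dt + v . grad(f) = 0.
    gx = (f - np.roll(f, 1, axis=0)) / h
    f = f - dt * vx * gx

# The zero level set is now a circle translated by ~vx * 40 * dt in x,
# i.e. the surface has been transported by the velocity field.
```

In the actual method both f and v are continuous neural fields rather than grids, so the same equation acts as a training constraint tying reconstruction to deformation instead of an explicit solver.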