🤖 AI Summary
To efficiently encode and generate high-dimensional 3D meshes (e.g., 15k-vertex meshes) under limited training samples and GPU resources, this paper proposes SpoDify, a spectral-domain diffusion framework with a training-free encoder. Methodologically, SpoDify integrates singular value decomposition (SVD) directly into the diffusion pipeline: it computes the Laplacian eigenbasis, applies SVD to the resulting spectral matrix, and retains only the top-512 singular values as a fixed-length latent code; diffusion is then performed exclusively in this spectral domain, eliminating the need for a learnable autoencoder. The approach requires no encoder training, has a minimal GPU memory footprint, and generalizes well in few-shot settings. Experiments demonstrate that the 512-dimensional spectral code supports high-fidelity mesh reconstruction, and that generative quality matches state-of-the-art methods while encoding efficiency is substantially improved.
📝 Abstract
Recent advances in learning latent codes from high-dimensional shapes have demonstrated impressive results in 3D generative modeling. Traditionally, these approaches employ a trained autoencoder to obtain a continuous implicit representation of source shapes, which can be computationally expensive. This paper introduces SpoDify, a novel spectral-domain diffusion framework for high-quality shape generation that uses singular value decomposition (SVD) for shape encoding. The resulting eigenvectors are stored for subsequent decoding, while generative modeling is performed on the eigenfeatures. This approach efficiently encodes complex meshes into continuous implicit representations, for example, encoding a 15k-vertex mesh into a 512-dimensional latent code without any learning. Our method offers significant advantages when training samples or GPU resources are limited. In mesh-generation tasks, it produces high-quality shapes comparable to state-of-the-art methods.