🤖 AI Summary
This work introduces the first end-to-end method for generating vector displacement maps (VDMs) from a single input image, enabling artists to seamlessly embed and edit fine geometric details on 3D model surfaces. The approach first estimates multi-view normal maps from the image, then employs a differentiable, normal-constrained reconstruction pipeline to invert them into attachable and editable geometric stamps. Key contributions include: (1) formalizing and implementing the VDM generation paradigm; (2) proposing the first fully automatic algorithm for extracting VDMs from 3D objects; (3) releasing the first open-source academic VDM dataset; and (4) focusing on modeling locally embeddable geometric components—distinct from holistic shape generation. Experiments demonstrate significant improvements over existing baselines in geometric accuracy, editability, and industrial compatibility. The method supports interactive customization and iterative re-editing, and has been successfully integrated into mainstream 3D modeling pipelines.
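To make the central object concrete: unlike a scalar displacement map, which offsets each surface point only along its normal, a VDM stores a full 3D offset per texel, which is what lets it encode overhangs and undercut details. The sketch below is a minimal illustration of that idea, not the paper's pipeline; all function and variable names are illustrative.

```python
import numpy as np

def apply_vdm(base_points, tangents, bitangents, normals, vdm, uv):
    """Displace surface points by a vector displacement map (VDM).

    Each texel of `vdm` holds a 3D offset expressed in the local
    tangent frame (tangent, bitangent, normal) of the base surface.
    Illustrative sketch only, not the paper's implementation.
    """
    h, w, _ = vdm.shape
    # Nearest-neighbor lookup of the map at the given UV coordinates.
    ix = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    iy = np.clip((uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    d = vdm[iy, ix]  # (N, 3) tangent-space displacements
    # Transform tangent-space offsets into world space and apply them.
    return (base_points
            + d[:, :1] * tangents
            + d[:, 1:2] * bitangents
            + d[:, 2:3] * normals)

# Usage: a single point on a flat +Z-facing patch, displaced by a
# constant map that leans the surface in +X while raising it in +Z.
pts = np.array([[0.5, 0.5, 0.0]])
t = np.array([[1.0, 0.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0]])
n = np.array([[0.0, 0.0, 1.0]])
vdm = np.zeros((2, 2, 3))
vdm[...] = [0.1, 0.0, 0.2]
out = apply_vdm(pts, t, b, n, vdm, np.array([[0.5, 0.5]]))
```

The lateral (tangential) components of the offset are exactly what a scalar height map cannot represent, which is why VDM "stamps" can carry detail like scales, horns, or folds that curl back over the surface.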
📝 Abstract
We introduce the first method for generating Vector Displacement Maps (VDMs): parameterized, detailed geometric stamps commonly used in 3D modeling. Given a single input image, our method first generates multi-view normal maps and then reconstructs a VDM from the normals via a novel reconstruction pipeline. We also propose an efficient algorithm for extracting VDMs from 3D objects, and present the first academic VDM dataset. Whereas existing 3D generative models focus on complete shapes, we focus on generating parts that can be seamlessly attached to shape surfaces. Our method gives artists rich control over adding geometric details to a 3D shape. Experiments demonstrate that our approach outperforms existing baselines. Generating VDMs also offers additional benefits, such as using 2D image editing to customize and refine 3D details.
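The extraction direction mentioned above, going from geometry to a VDM, amounts at its core to recording, per surface point, the offset between a detailed surface and its base surface in the base surface's local frame. The sketch below shows only that core projection step under the assumption that point correspondences and tangent frames are already given; the paper's automatic extraction algorithm, which establishes those correspondences and the parameterization, is not reproduced here.

```python
import numpy as np

def extract_vdm_offsets(base_points, detail_points,
                        tangents, bitangents, normals):
    """Express detail-minus-base offsets in the base tangent frame.

    Returns an (N, 3) array of tangent-space displacement vectors,
    one per corresponded point pair. Illustrative sketch only; the
    paper's algorithm also solves for correspondence and layout.
    """
    offsets = detail_points - base_points  # world-space offsets
    # Project each offset onto the local (tangent, bitangent, normal) axes.
    return np.stack([
        np.einsum('ij,ij->i', offsets, tangents),
        np.einsum('ij,ij->i', offsets, bitangents),
        np.einsum('ij,ij->i', offsets, normals),
    ], axis=-1)

# Round-trip check: displace a base point by a known tangent-space
# vector, then recover that vector from the two surfaces.
t = np.array([[1.0, 0.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0]])
n = np.array([[0.0, 0.0, 1.0]])
base = np.array([[0.0, 0.0, 0.0]])
d_true = np.array([[0.1, -0.05, 0.3]])
detail = base + d_true[:, :1] * t + d_true[:, 1:2] * b + d_true[:, 2:3] * n
d_rec = extract_vdm_offsets(base, detail, t, b, n)
```

Rasterizing these per-point vectors into a texture over the base surface's UV parameterization is what turns them into a reusable, editable stamp.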