🤖 AI Summary
Current single-image- or text-driven 3D generation methods lack component-level controllability, so any local edit forces a full re-synthesis of the model. To address this, we propose the first conditional multi-view diffusion framework that enables controllable 3D generation and pixel-accurate local editing from a single input image, decoupling generation from editing. Our method integrates multi-view diffusion modeling, conditional latent-space guidance, cross-view consistency constraints, and a single-image-driven 3D component-level editing mechanism. It supports semantic-part-conditioned generation and modification without global re-synthesis. Experiments demonstrate significant improvements in part generation quality and editing fidelity: editing a single rendered view precisely updates the corresponding 3D region, achieving over 3× higher editing efficiency than end-to-end baselines. This overcomes the long-standing bottleneck of fine-grained control in end-to-end 3D generative modeling.
📝 Abstract
Recently, 3D generation methods have shown a powerful ability to automate 3D model creation. However, most rely only on an input image or a text prompt to generate a 3D model, which offers no control over individual components of the generated model: any modification of the input image triggers regeneration of the entire 3D model. In this paper, we introduce a new method called CMD that generates a 3D model from an input image while enabling flexible local editing of each component of the model. CMD formulates 3D generation as a conditional multiview diffusion model, which takes the existing or known parts as conditions and generates the edited or added components. This conditional multiview diffusion model not only allows a 3D model to be generated part by part but also enables local editing of the 3D model according to a local revision of the input image, without changing other 3D parts. Extensive experiments demonstrate that CMD decomposes a complex 3D generation task into multiple components, improving generation quality. Meanwhile, CMD enables efficient and flexible local editing of a 3D model by editing just one rendered image.
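The core idea of conditioning a diffusion model on known parts can be illustrated with a minimal masked-update sketch. This is a hypothetical toy, not the authors' implementation: CMD's actual conditioning operates on multiview image latents inside a diffusion network, whereas here `denoise_fn`, `known_mask`, and the latent vectors are stand-in assumptions that only show how fixed (known) regions stay untouched while unknown regions are generated.

```python
def conditional_denoise_step(latent, condition, known_mask, denoise_fn):
    """One masked denoising update (illustrative only).

    Entries where known_mask is True are overwritten with the fixed
    condition (the existing/known parts); the remaining entries take
    the denoiser's prediction (the edited/added components).
    """
    predicted = denoise_fn(latent)
    return [c if keep else p
            for p, c, keep in zip(predicted, condition, known_mask)]

# Toy usage with a fake "denoiser" that halves each value.
latent = [0.9, -0.4, 0.7, 0.2]
condition = [0.5, 0.5, 0.0, 0.0]      # known part: first two entries
known_mask = [True, True, False, False]
step = conditional_denoise_step(
    latent, condition, known_mask, lambda xs: [0.5 * x for x in xs])
print(step)  # known entries stay fixed at the condition values
```

Iterating such masked updates over the full denoising schedule is one common way (e.g., inpainting-style diffusion) to keep conditioned regions consistent while synthesizing the rest.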