🤖 AI Summary
Existing large 3D reconstruction models (LRMs) struggle to support fine-grained, text-driven editing—such as color, material, or geometric texture manipulation—and typically rely on paired training data or costly per-instance fine-tuning.
Method: We propose the first unified, single-stage framework for joint 3D generation and editing, built upon triplane implicit representations. Our approach introduces a text-conditioned diffusion adapter that operates end-to-end in latent space to modify geometrically consistent 3D structures, requiring neither 3D annotations nor paired supervision. By integrating a Transformer-based architecture with diffusion priors, it enables efficient and controllable text-to-3D editing.
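To make the idea concrete, the sketch below illustrates the general pattern of a text-conditioned denoising update applied directly to triplane latents, rather than to rendered images or meshes. All dimensions, the FiLM-style conditioning, and the toy noise predictor are illustrative assumptions, not the paper's actual adapter architecture or weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 3 axis-aligned feature planes,
# each C x H x W, plus a D-dimensional embedding of the edit prompt.
C, H, W, D = 4, 8, 8, 16

# Random projections standing in for learned adapter weights.
W_scale = rng.normal(0.0, 0.1, size=(D, C))
W_shift = rng.normal(0.0, 0.1, size=(D, C))

def film_condition(triplane, text_emb):
    """FiLM-style conditioning: scale/shift each latent channel by the prompt."""
    scale = 1.0 + text_emb @ W_scale  # (C,)
    shift = text_emb @ W_shift        # (C,)
    return triplane * scale[None, :, None, None] + shift[None, :, None, None]

def denoise_step(noisy, text_emb, sigma):
    """One Euler-style denoising step; the residual of the conditioned latent
    stands in for a learned noise-prediction network."""
    eps_hat = film_condition(noisy, text_emb) - noisy  # toy noise estimate
    return noisy - sigma * eps_hat

triplane = rng.normal(size=(3, C, H, W))  # latent 3D object: 3 feature planes
text_emb = rng.normal(size=(D,))          # embedding of an edit instruction

edited = denoise_step(triplane, text_emb, sigma=0.1)
assert edited.shape == (3, C, H, W)  # the edit never leaves triplane latent space
```

The key property the sketch mirrors is that the edit is a transformation of the shared triplane latent, so all views decoded from it stay geometrically consistent, and no paired (original, edited) 3D supervision is needed.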
Contribution/Results: Evaluated on Objaverse LVIS, our method significantly outperforms two-stage baselines in both quantitative metrics and qualitative assessments, demonstrating superior instruction alignment and material-geometry consistency.
📝 Abstract
Transformer-based methods have enabled users to create, modify, and comprehend text and image data. Recently proposed Large Reconstruction Models (LRMs) extend this further by generating high-quality 3D models from a single object image. These models, however, cannot manipulate or edit finer details, such as adding standard design patterns or changing the color and reflectance of the generated objects; they thus lack the fine-grained control that would be valuable in domains such as augmented reality, animation, and gaming. Naively training LRMs for this purpose would require generating precisely paired edited images and 3D objects, which is computationally expensive. In this paper, we propose Instructive3D, a novel LRM-based model that integrates the generation of 3D objects and their fine-grained editing through user text prompts into a single model. We accomplish this by adding an adapter that performs a diffusion process, conditioned on a text prompt specifying the desired edits, in the triplane latent space of the 3D object representation. Our method does not require generating edited 3D objects for training. Moreover, because edits specified by user text prompts are applied to the triplane latent representation, Instructive3D performs geometrically consistent modifications, enhancing the versatility and precision of the generated 3D objects. We compare objects generated by Instructive3D against a baseline that first generates 3D object meshes using a standard LRM and then edits them with text prompts, using input images from the Objaverse LVIS dataset. We find that Instructive3D produces qualitatively superior 3D objects that exhibit the properties specified by the edit prompts.