ShapeUP: Scalable Image-Conditioned 3D Editing

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D editing methods struggle to achieve visual controllability, geometric consistency, and scalability at once, often suffering from slow inference, visual drift, or reliance on fixed priors. This work proposes ShapeUP, an image-conditioned 3D editing framework that formulates editing as a supervised latent-to-latent translation within a native 3D representation, leveraging a pretrained 3D foundation model for fine-grained control. Notably, it enables mask-free, implicitly localized editing, preserving structural consistency while overcoming the scalability limits of prior training-free methods. By supervising a 3D Diffusion Transformer (DiT) on triplets of source 3D shapes, edited 2D images, and target 3D shapes, the method outperforms both trained and training-free baselines in identity preservation and editing fidelity, demonstrating efficient, robust, and scalable 3D content editing.

📝 Abstract
Recent advancements in 3D foundation models have enabled the generation of high-fidelity assets, yet precise 3D manipulation remains a significant challenge. Existing 3D editing frameworks often face a difficult trade-off between visual controllability, geometric consistency, and scalability. Specifically, optimization-based methods are prohibitively slow, multi-view 2D propagation techniques suffer from visual drift, and training-free latent manipulation methods are inherently bound by frozen priors and cannot directly benefit from scaling. In this work, we present ShapeUP, a scalable, image-conditioned 3D editing framework that formulates editing as a supervised latent-to-latent translation within a native 3D representation. This formulation allows ShapeUP to build on a pretrained 3D foundation model, leveraging its strong generative prior while adapting it to editing through supervised training. In practice, ShapeUP is trained on triplets consisting of a source 3D shape, an edited 2D image, and the corresponding edited 3D shape, and learns a direct mapping using a 3D Diffusion Transformer (DiT). This image-as-prompt approach enables fine-grained visual control over both local and global edits and achieves implicit, mask-free localization, while maintaining strict structural consistency with the original asset. Our extensive evaluations demonstrate that ShapeUP consistently outperforms current trained and training-free baselines in both identity preservation and edit fidelity, offering a robust and scalable paradigm for native 3D content creation.
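The core formulation above — a DiT supervised on (source shape, edited image, target shape) triplets to map a source latent directly to an edited latent — can be sketched in miniature. This is a hypothetical NumPy toy, not the paper's implementation: the `encode` pooling, the linear `dit_predict` stand-in for the 3D DiT, and all shapes are illustrative assumptions; the real system operates in a pretrained 3D foundation model's latent space with diffusion training.

```python
# Toy sketch of the supervised latent-to-latent objective (all components hypothetical).
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy latent dimension

def encode(x):
    # Stand-in for a frozen 3D encoder: shape tokens -> single latent (mean pooling).
    return x.mean(axis=0)

def dit_predict(z_src, img_emb, W):
    # Toy "DiT": predicts the edited latent from the source latent,
    # conditioned on the edited-image embedding (here just a linear map).
    return np.tanh(W @ np.concatenate([z_src, img_emb]))

# One training triplet: source 3D shape, edited 2D image embedding, target 3D shape.
src_shape = rng.normal(size=(32, D))
img_emb = rng.normal(size=D)
tgt_shape = rng.normal(size=(32, D))

z_src, z_tgt = encode(src_shape), encode(tgt_shape)
W = rng.normal(scale=0.1, size=(D, 2 * D))

# Supervised latent-to-latent loss: regress the predicted latent onto the
# latent of the ground-truth edited shape.
z_pred = dit_predict(z_src, img_emb, W)
loss = float(np.mean((z_pred - z_tgt) ** 2))
print(loss)
```

In the paper's setting this regression target would be realized through a diffusion objective on the DiT, but the supervision signal is the same: the edited 2D image acts as the prompt, and the ground-truth edited 3D latent provides the direct mapping target.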
Problem

Research questions and friction points this paper is trying to address.

3D editing
image-conditioned
geometric consistency
scalability
visual controllability
Innovation

Methods, ideas, or system contributions that make the work stand out.

image-conditioned 3D editing
latent-to-latent translation
3D Diffusion Transformer
scalable 3D manipulation
native 3D representation