VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space

πŸ“… 2025-08-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address poor consistency in unedited regions and the structural distortions arising from multi-view image reconstruction in 3D local editing, this paper proposes the first training-free editing method that operates in a native 3D latent space. The approach leverages 3D latent inversion, context-aware denoising feature replacement, and a key-value caching mechanism to achieve precise control over edited regions while preserving geometric and appearance integrity in unedited areas. The authors introduce an inversion trajectory prediction strategy and establish Edit3D-Bench, the first manually annotated 3D editing benchmark for quantitative evaluation. Experiments demonstrate that the method significantly outperforms existing approaches in editing consistency, structural coherence, and generation fidelity. It enables high-quality paired data synthesis and context-aware 3D content editing without requiring task-specific training or external 2D priors.

πŸ“ Abstract
3D local editing of specified regions is crucial for the game industry and robotic interaction. Recent methods typically edit rendered multi-view images and then reconstruct 3D models, but they face challenges in precisely preserving unedited regions and maintaining overall coherence. Inspired by structured 3D generative models, we propose VoxHammer, a novel training-free approach that performs precise and coherent editing in 3D latent space. Given a 3D model, VoxHammer first predicts its inversion trajectory and obtains its inverted latents and key-value tokens at each timestep. Subsequently, in the denoising and editing phase, we replace the denoising features of preserved regions with the corresponding inverted latents and cached key-value tokens. By retaining these contextual features, this approach ensures consistent reconstruction of preserved areas and coherent integration of edited parts. To evaluate the consistency of preserved regions, we constructed Edit3D-Bench, a human-annotated dataset comprising hundreds of samples, each with carefully labeled 3D editing regions. Experiments demonstrate that VoxHammer significantly outperforms existing methods in terms of both the 3D consistency of preserved regions and overall quality. Our method holds promise for synthesizing high-quality edited paired data, thereby laying the data foundation for in-context 3D generation. See our project page at https://huanngzh.github.io/VoxHammer-Page/.
Problem

Research questions and friction points this paper is trying to address.

Achieving precise 3D local editing in specified regions
Preserving unedited areas during 3D model reconstruction
Ensuring overall coherence in 3D editing integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free 3D editing in latent space
Replaces denoising features of preserved regions with inverted latents and cached key-value tokens
Ensures coherence through contextual feature retention
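The feature-replacement idea above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: `denoise_fn` is a hypothetical placeholder for one model denoising step, and the key-value token replacement inside attention layers is omitted; only the per-timestep latent overwrite of preserved tokens is shown.

```python
import numpy as np

def edit_denoise_step(z_t, z_inv_t, preserve_mask, denoise_fn):
    """One denoising step with preserved-region latent replacement.

    z_t           : current latent tokens, shape (N, D)
    z_inv_t       : inverted latents cached at this timestep, shape (N, D)
    preserve_mask : boolean mask over tokens, True where the region is unedited
    denoise_fn    : stand-in for one model denoising step (hypothetical)
    """
    # Overwrite tokens in the preserved region with the cached inverted
    # latents, so the unedited region follows its original trajectory
    # while edited tokens are free to change.
    z_t = np.where(preserve_mask[:, None], z_inv_t, z_t)
    return denoise_fn(z_t)

# Toy usage: 4 tokens, 3-dim latents, identity "denoiser".
z_t = np.zeros((4, 3))
z_inv_t = np.ones((4, 3))
preserve_mask = np.array([True, False, True, False])
z_next = edit_denoise_step(z_t, z_inv_t, preserve_mask, lambda z: z)
```

In the full method this replacement is applied at every timestep, and the cached key-value tokens of preserved tokens are likewise substituted inside the attention layers so edited parts attend to the original context.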
Lin Li
Renmin University of China
Zehuan Huang
Beihang University
Generative Model, Computer Vision
Haoran Feng
Tsinghua University
Computer Vision
Gengxiong Zhuang
Beihang University
Rui Chen
Beihang University
Chunchao Guo
Tencent Hunyuan
Lu Sheng
School of Software, Beihang University
Embodied AI, 3D Vision, Machine Learning