🤖 AI Summary
NeRF’s implicit representation hinders efficient 3D editing. To address this, we propose AGAP, which explicitly models the 3D color field as an editable 2D canonical image and jointly learns a regularized projection field that maps 3D points to 2D pixels. Our key contributions are: (1) decoupling 3D editing into an editable canonical image and a learnable projection field; (2) offset regularization to keep the projection geometry-preserving and the canonical image natural; and (3) support for fast, multimodal editing, including interactive drawing, instance segmentation, and stylization. Methodologically, AGAP combines an implicit-explicit hybrid representation, pseudo canonical camera initialization, and end-to-end joint optimization. Extensive evaluation across multiple datasets demonstrates that AGAP achieves a ≥20× per-edit speedup over existing NeRF-based editing methods while maintaining high-fidelity rendering and strong 3D geometric consistency.
📝 Abstract
Neural radiance fields, which represent a 3D scene as a color field and a density field, have demonstrated great progress in novel view synthesis, yet their implicit nature makes them unfavorable for editing. This work studies the task of efficient 3D editing, focusing on editing speed and user interactivity. To this end, we propose to learn the color field as an explicit 2D appearance aggregation, called a canonical image, with which users can easily customize their 3D edits via 2D image processing. We complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture query. This field is initialized with a pseudo canonical camera model and optimized with an offset regularization to ensure the naturalness of the canonical image. Extensive experiments on different datasets suggest that our representation, dubbed AGAP, well supports various modes of 3D editing (e.g., stylization, instance segmentation, and interactive drawing). Our approach demonstrates remarkable efficiency, being at least 20 times faster per edit than existing NeRF-based editing methods. Project page is available at https://felixcheng97.github.io/AGAP/.
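The core idea above, projecting a 3D point to a pixel in the canonical image (via a pseudo canonical camera plus a small learned offset, kept small by a regularizer) and sampling a color there, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the pinhole intrinsics `K`, the per-point `offsets` (which in the real system would come from a learned field), and all function names are assumptions.

```python
import numpy as np

def project_to_canonical(points, K, offsets):
    """Map 3D points to 2D canonical-image coordinates.

    points:  (N, 3) sample locations, assumed in the pseudo camera's frame
    K:       (3, 3) intrinsics of the pseudo canonical camera
    offsets: (N, 2) learned per-point 2D offsets (hypothetical stand-in
             for the optimized projection field)
    """
    uv_h = (K @ points.T).T            # (N, 3) homogeneous projection
    uv = uv_h[:, :2] / uv_h[:, 2:3]    # perspective divide -> base pixels
    return uv + offsets                # learned offsets refine the mapping

def offset_regularizer(offsets):
    """L2 penalty keeping offsets small, so the projection stays close to
    the geometry-preserving pseudo-camera projection."""
    return float(np.mean(np.sum(offsets ** 2, axis=-1)))

def sample_canonical(image, uv):
    """Bilinearly sample colors from the canonical image at pixel coords uv."""
    H, W, _ = image.shape
    u = np.clip(uv[:, 0], 0.0, W - 1.001)
    v = np.clip(uv[:, 1], 0.0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    c00, c01 = image[v0, u0], image[v0, u0 + 1]
    c10, c11 = image[v0 + 1, u0], image[v0 + 1, u0 + 1]
    return (c00 * (1 - du) * (1 - dv) + c01 * du * (1 - dv)
            + c10 * (1 - du) * dv + c11 * du * dv)
```

Editing the canonical `image` with any 2D tool then changes the color returned for every 3D point that projects to the edited pixels, which is what makes a single 2D edit propagate consistently across views.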