🤖 AI Summary
Existing 3D personalization methods rely on computationally expensive retraining or struggle to transfer 2D vision-language models (e.g., CLIP) to the 3D domain. To address this, we propose Invert3D—a novel framework that introduces, for the first time, a camera-conditioned 3D-to-text inverse mapping mechanism. It constructs a differentiable 3D embedding space directly over neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) representations, and aligns it with the CLIP text embedding space—without any fine-tuning. This enables efficient, high-fidelity, natural language–driven 3D editing. Experiments demonstrate Invert3D’s strong generalization across diverse 3D scenes and superior editing fidelity, while drastically reducing computational overhead for personalized 3D content generation. Our work establishes a new paradigm for language-guided 3D generation and editing.
📝 Abstract
Recent advances in NeRF and 3DGS have significantly enhanced the efficiency and quality of 3D content synthesis. However, efficient personalization of generated 3D content remains a critical challenge. Current 3D personalization approaches rely predominantly on knowledge distillation, which requires computationally expensive retraining procedures. To address this challenge, we propose Invert3D, a novel framework for convenient 3D content personalization. Vision-language models such as CLIP now enable direct image personalization through aligned vision-text embedding spaces; however, the inherent structural differences between 3D content and 2D images preclude the direct application of these techniques to 3D personalization. Our approach bridges this gap by establishing alignment between 3D representations and the text embedding space. Specifically, we develop a camera-conditioned 3D-to-text inverse mechanism that projects 3D content into a 3D embedding aligned with text embeddings. This alignment enables efficient manipulation and personalization of 3D content through natural language prompts, eliminating the need for computationally expensive retraining. Extensive experiments demonstrate that Invert3D achieves effective personalization of 3D content. Our work is available at: https://github.com/qsong2001/Invert3D.
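To make the described mechanism concrete, the following is a minimal, hedged sketch of the pipeline the abstract outlines: a camera-conditioned inverse mapping projects a 3D representation into an embedding space shared with text embeddings, after which editing is driven by similarity to a text prompt rather than retraining. This is not the authors' implementation; every function, dimension, and vector here is an illustrative placeholder (real systems would use CLIP's encoders and high-dimensional embeddings).

```python
import math

# Toy embedding dimensionality (CLIP typically uses 512 or more).
DIM = 4

def cosine_similarity(a, b):
    """Alignment score between two embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def camera_conditioned_embed(scene_params, camera_pose):
    """Placeholder for the camera-conditioned 3D-to-text inverse mapping:
    combines scene parameters with a camera pose to produce an embedding
    in the space shared with text embeddings."""
    return [s + 0.1 * c for s, c in zip(scene_params, camera_pose)]

def edit_toward_prompt(embedding, text_embedding, step=0.5):
    """Language-driven editing step: nudge the 3D embedding toward the
    prompt's embedding, with no retraining of the 3D representation."""
    return [e + step * (t - e) for e, t in zip(embedding, text_embedding)]

scene = [1.0, 0.0, 0.0, 0.0]   # placeholder 3D representation (NeRF/3DGS params)
camera = [0.0, 1.0, 0.0, 0.0]  # placeholder camera pose
prompt = [0.0, 0.0, 1.0, 0.0]  # placeholder text embedding of an edit prompt

z3d = camera_conditioned_embed(scene, camera)
before = cosine_similarity(z3d, prompt)
after = cosine_similarity(edit_toward_prompt(z3d, prompt), prompt)
print(after > before)  # the edit increases alignment with the prompt
```

The key design point this toy example mirrors is that once 3D content lives in an embedding space aligned with text, personalization reduces to cheap operations in that space instead of the expensive distillation-based retraining the abstract contrasts against.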