Align 3D Representation and Text Embedding for 3D Content Personalization

📅 2025-08-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D personalization methods rely on computationally expensive retraining or struggle to transfer 2D vision-language models (e.g., CLIP) to the 3D domain. To address this, we propose Invert3D—a novel framework that introduces, for the first time, a camera-conditioned 3D-to-text inverse mapping mechanism. It constructs a differentiable 3D embedding space directly over neural radiance fields (NeRF) and 3D Gaussian splatting (3DGS) representations, and aligns it with the CLIP text embedding space—without any fine-tuning. This enables efficient, high-fidelity, natural language–driven 3D editing. Experiments demonstrate Invert3D’s strong generalization across diverse 3D scenes and superior editing fidelity, while drastically reducing computational overhead for personalized 3D content generation. Our work establishes a new paradigm for language-guided 3D generation and editing.

📝 Abstract
Recent advances in NeRF and 3DGS have significantly enhanced the efficiency and quality of 3D content synthesis. However, efficient personalization of generated 3D content remains a critical challenge. Current 3D personalization approaches predominantly rely on knowledge distillation-based methods, which require computationally expensive retraining procedures. To address this challenge, we propose Invert3D, a novel framework for convenient 3D content personalization. Vision-language models such as CLIP enable direct image personalization through aligned vision-text embedding spaces. However, the inherent structural differences between 3D content and 2D images preclude direct application of these techniques to 3D personalization. Our approach bridges this gap by establishing alignment between 3D representations and text embedding spaces. Specifically, we develop a camera-conditioned 3D-to-text inverse mechanism that projects 3D content into a 3D embedding aligned with text embeddings. This alignment enables efficient manipulation and personalization of 3D content through natural language prompts, eliminating the need for computationally expensive retraining procedures. Extensive experiments demonstrate that Invert3D achieves effective personalization of 3D content. Our work is available at: https://github.com/qsong2001/Invert3D.
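
As a rough illustration of the idea described in the abstract, the sketch below maps a 3D scene into CLIP's embedding space by encoding views rendered under different camera poses and aggregating them into a single embedding that can be compared against text embeddings. The renderer `render_view`, the `scene` object, and the mean-pooling aggregation are placeholders for illustration, not Invert3D's actual camera-conditioned inverse mechanism.

```python
# Minimal sketch: project a 3D representation (NeRF/3DGS) into a CLIP-aligned
# embedding by aggregating CLIP image features of rendered views.
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_3d_scene(scene, camera_poses, render_view):
    """Map a 3D scene to a CLIP-space embedding.

    `render_view(scene, pose)` is a hypothetical helper that returns a PIL
    image rendered from the given camera pose.
    """
    feats = []
    with torch.no_grad():
        for pose in camera_poses:
            img = preprocess(render_view(scene, pose)).unsqueeze(0).to(device)
            feats.append(model.encode_image(img))
    z3d = torch.cat(feats).mean(dim=0, keepdim=True)   # aggregate over camera poses
    return z3d / z3d.norm(dim=-1, keepdim=True)        # unit-normalize for cosine similarity

def text_embedding(prompt):
    """Encode a natural-language prompt into the same CLIP space."""
    with torch.no_grad():
        z = model.encode_text(clip.tokenize([prompt]).to(device))
    return z / z.norm(dim=-1, keepdim=True)
```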
Problem

Research questions and friction points this paper is trying to address.

Aligning 3D representations with text embedding spaces
Enabling efficient 3D content personalization via language
Eliminating computationally expensive retraining procedures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns 3D representations with text embeddings
Projects 3D content into text-aligned embedding space
Enables language-based 3D personalization without retraining
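
The last point can be illustrated with a generic CLIP-guidance loop built on the helpers sketched under the abstract: a prompt embedding supplies the target, and a cosine-distance loss steers the 3D representation toward it without retraining a generative model. This is a hedged sketch of the general recipe, not necessarily the optimization used by Invert3D; `scene.parameters()` and the loop below are hypothetical.

```python
import torch

def personalization_loss(z3d: torch.Tensor, z_text: torch.Tensor) -> torch.Tensor:
    """CLIP-space cosine distance between the scene embedding and the prompt embedding."""
    return 1.0 - torch.cosine_similarity(z3d, z_text, dim=-1).mean()

# Hypothetical usage, reusing embed_3d_scene / text_embedding from the earlier sketch
# (the scene embedding must be computed with gradients enabled, unlike the
# no_grad version shown there):
# z_text = text_embedding("a bronze statue standing in a snowy garden")
# optimizer = torch.optim.Adam(scene.parameters(), lr=1e-3)  # `scene.parameters()` is hypothetical
# for step in range(200):
#     z3d = embed_3d_scene(scene, camera_poses, render_view)
#     loss = personalization_loss(z3d, z_text)
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()
```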
Authors

Qi Song
Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China
Ziyuan Luo
Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China
Ka Chun Cheung
NVIDIA AI Technology Center, NVIDIA, Hong Kong SAR, China
Simon See
NVIDIA
Applied Mathematics, AI, Machine Learning, High Performance Computing, Simulation
Renjie Wan
Department of Computer Science, Hong Kong Baptist University
Digital Watermarking, AI Security, Image Processing