🤖 AI Summary
This work addresses the challenge of extracting reusable geometric and appearance attributes from a reference shape and combining them with textual descriptions to generate personalized 3D shapes. To this end, the authors propose a region-level concept learning mechanism that disentangles geometry and appearance, enabling fine-grained, context-aware attribute extraction and composition through a progressive optimization strategy. Notably, the approach operates without requiring explicit contextual supervision and supports flexible cross-category transfer. Experimental results demonstrate that the proposed framework efficiently synthesizes high-quality, semantically consistent personalized 3D models, significantly outperforming existing methods in cross-category generation tasks.
📝 Abstract
We present PEGAsus, a new framework that generates Personalized 3D shapes by learning shape concepts at both the Geometry and Appearance levels. First, we formulate 3D shape personalization as extracting reusable, category-agnostic geometric and appearance attributes from reference shapes and composing these attributes with text to generate novel shapes. Second, we design a progressive optimization strategy that learns shape concepts at the geometry and appearance levels separately, decoupling the concept learning process. Third, we extend our approach to region-wise concept learning with context-aware and context-free losses, enabling flexible concept extraction. Extensive experiments show that PEGAsus effectively extracts attributes from a wide range of reference shapes and flexibly composes these concepts with text to synthesize new shapes. This enables fine-grained control over shape generation and supports diverse, personalized results, even in challenging cross-category scenarios. Both quantitative and qualitative experiments demonstrate that our approach outperforms existing state-of-the-art solutions.
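The core idea of the progressive optimization strategy, stage one learns a geometry-level concept, then freezes it while stage two learns an appearance-level concept, can be illustrated with a deliberately simplified toy sketch. Everything below is hypothetical: the `fit` routine, the target vectors, and the least-squares objective stand in for the paper's actual concept embeddings and losses, which are not specified in this abstract.

```python
import numpy as np

def fit(params, target, lr=0.1, steps=200):
    """Toy stand-in for one optimization stage: plain gradient
    descent on ||params - target||^2 (not the paper's actual loss)."""
    for _ in range(steps):
        grad = 2.0 * (params - target)
        params = params - lr * grad
    return params

rng = np.random.default_rng(0)
geo_target = rng.normal(size=4)   # stands in for reference geometry attributes
app_target = rng.normal(size=4)   # stands in for reference appearance attributes

# Stage 1: optimize only the geometry-level concept vector.
geo_concept = fit(np.zeros(4), geo_target)

# Stage 2: geometry is frozen; only the appearance-level concept is updated,
# mirroring the decoupled, progressive learning described above.
app_concept = fit(np.zeros(4), app_target)

print(np.allclose(geo_concept, geo_target, atol=1e-3))  # True
```

The point of the two-stage structure is that appearance optimization never perturbs the already-learned geometry concept, which is what "decoupling the concept learning process" buys in the real method.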