🤖 AI Summary
Existing virtual try-on systems often rely on predefined templates, limiting user-driven fine-grained personalization and resulting in unnatural integration between 2D-generated accessories and 3D avatars. To address these limitations, this work proposes the first framework that seamlessly integrates controllable 2D generative adversarial networks (GANs) with avatars based on 3D Gaussian blendshapes, enabling end-to-end generation and rendering of personalized eyewear. The method allows users to interactively and precisely customize eyewear appearance in virtual reality with real-time feedback, while ensuring high-fidelity embedding into 3D facial models. This integration significantly enhances both the realism and interactivity of virtual try-on experiences, setting a new standard for personalized digital fashion in immersive environments.
📝 Abstract
Virtual try-on (VTON) systems allow users to interactively try on different products within VR scenarios. However, most existing VTON methods operate only on predefined eyewear templates and lack support for fine-grained, user-driven customization. While GlassesGAN enables personalized 2D eyewear design, its capability remains limited to 2D image generation. Motivated by the success of 3D Gaussian Blendshapes in head reconstruction, we integrate these two techniques and propose GlassesGB, a framework that supports customizable eyewear generation for 3D head avatars. GlassesGB effectively bridges 2D generative customization with 3D head avatar rendering, addressing the challenge of achieving personalized eyewear design for VR applications. The implementation code is available at https://ruiyangju.github.io/GlassesGB.