🤖 AI Summary
Existing text-guided texture generation methods suffer from semantic ambiguity and the "Janus problem," producing globally incoherent structures and losing high-frequency details, which degrades both visual clarity and overall perceptual quality. To address these issues, we propose a vision-guided diffusion framework with two key components: (1) a Visual Guidance Enhancement module that fuses image-conditioned encodings into the generation process to reduce textual ambiguity and preserve high-frequency detail, and (2) a Direction-Aware Adaptation module that automatically generates direction prompts from multi-view camera poses to avoid the Janus problem and maintain global semantic consistency. Our approach preserves fine-grained text controllability while significantly improving the structural integrity and detail fidelity of generated textures. Evaluations show superior performance over state-of-the-art methods on FID, LPIPS, and user studies, with notably better visual fidelity and spatial coherence on complex geometric surfaces.
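As a rough illustration of the direction-prompt idea, the sketch below maps a camera pose to a view phrase appended to the text prompt, so each rendered view is described consistently with where the camera actually sits. The bin thresholds and view wording are our assumptions for illustration, not the paper's actual rules.

```python
# A minimal sketch of direction-aware prompt design. The view bins,
# thresholds, and phrasing below are illustrative assumptions; the
# paper's module may bin poses differently.

def direction_prompt(base_prompt: str, azimuth_deg: float, elevation_deg: float) -> str:
    """Append a view-dependent phrase so each rendered view is described
    consistently with its camera pose (mitigating the Janus problem)."""
    az = azimuth_deg % 360.0
    if elevation_deg > 60.0:
        view = "top view"
    elif az < 45.0 or az >= 315.0:
        view = "front view"
    elif az < 135.0 or az >= 225.0:
        view = "side view"
    else:
        view = "back view"
    return f"{base_prompt}, {view}"


# Example: prompts generated for four canonical camera poses.
for az in (0.0, 90.0, 180.0, 270.0):
    print(direction_prompt("a weathered bronze statue", az, 15.0))
```

Tying the prompt to the pose this way means no single textual description has to cover all views at once, which is where the Janus problem typically originates.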
📝 Abstract
Recent texture generation methods achieve impressive results by leveraging the powerful generative prior of large-scale text-to-image diffusion models. However, abstract textual prompts provide limited global textural or shape information, causing these methods to produce blurry or inconsistent patterns. To tackle this, we present FlexiTex, which embeds rich information via visual guidance to generate high-quality textures. The core of FlexiTex is the Visual Guidance Enhancement module, which incorporates more specific information from visual guidance to reduce ambiguity in the text prompt and preserve high-frequency details. To further enhance the visual guidance, we introduce a Direction-Aware Adaptation module that automatically designs direction prompts based on different camera poses, avoiding the Janus problem and maintaining global semantic consistency. Benefiting from the visual guidance, FlexiTex produces quantitatively and qualitatively sound results, demonstrating its potential to advance texture generation for real-world applications.
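To make the visual-guidance idea concrete, here is a minimal sketch of one way image conditioning could be fused with text conditioning before a diffusion denoiser's cross-attention. The tensor names, encoder choices, and simple weighted concatenation are assumptions for illustration, not the actual design of the Visual Guidance Enhancement module.

```python
# A minimal PyTorch sketch (assumed, not the paper's implementation) of
# fusing visual guidance with text conditioning for a diffusion model.

import torch


def fuse_guidance(text_emb: torch.Tensor,
                  image_emb: torch.Tensor,
                  guidance_weight: float = 0.5) -> torch.Tensor:
    """Blend image-conditioned tokens with text tokens so the denoiser's
    cross-attention sees both: image tokens supply the concrete structural
    and high-frequency cues that an abstract text prompt alone lacks.

    text_emb:  (batch, n_text_tokens, dim),  e.g. from a text encoder
    image_emb: (batch, n_image_tokens, dim), e.g. from an image encoder
    """
    # Concatenate along the token axis; downstream cross-attention can
    # then attend to textual and visual conditioning jointly.
    return torch.cat([text_emb, guidance_weight * image_emb], dim=1)
```

Concatenating along the token axis (rather than replacing the text conditioning) is one way to keep the text prompt fully controllable while letting the image tokens contribute the global consistency and detail described above.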