🤖 AI Summary
This work addresses the challenge of accurately rendering complex text and mathematical expressions in text-to-image generation under out-of-distribution prompts. The authors propose a training-free, agent-based workflow that, for the first time, combines glyph template injection with latent-space manipulation and attention-map modulation to achieve high-fidelity text rendering through iterative refinement. The method is versatile, operating as a plug-and-play module with multiple mainstream text-to-image models without any additional training. Evaluated on a newly curated benchmark for complex character and formula rendering, the approach significantly outperforms existing methods. The implementation code has been made publicly available.
📝 Abstract
Despite the significant progress in text rendering driven by recent advances in generative models, accurately generating complex text and mathematical formulas remains a formidable challenge. This difficulty primarily stems from the limited instruction-following capabilities of current models when they encounter out-of-distribution prompts. To address this, we introduce GlyphBanana, together with a corresponding benchmark specifically designed for rendering complex characters and formulas. GlyphBanana employs an agentic workflow that integrates auxiliary tools to inject glyph templates into both the latent space and the attention maps, facilitating iterative refinement of the generated images. Notably, our training-free approach can be seamlessly applied to various Text-to-Image (T2I) models, achieving superior precision compared to existing baselines. Extensive experiments demonstrate the effectiveness of the proposed workflow. Associated code is publicly available at https://github.com/yuriYanZeXuan/GlyphBanana.
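The two injection sites named in the abstract (latent space and attention maps) can be illustrated with a minimal sketch. All names, shapes, and the specific blending rule below are assumptions for illustration only, not the paper's actual operators: a glyph template rendered into the latent resolution is masked-blended into the image latent, cross-attention weights toward glyph-bearing text tokens are upweighted, and the blend is repeated over a few refinement iterations.

```python
import numpy as np

def inject_glyph_template(latent, glyph_latent, mask, strength=0.5):
    """Blend a pre-rendered glyph-template latent into the image latent
    inside a spatial mask (hypothetical blending rule)."""
    return latent * (1.0 - strength * mask) + glyph_latent * (strength * mask)

def modulate_attention(attn, text_token_ids, boost=2.0):
    """Upweight attention toward glyph-bearing text tokens and
    renormalize each row back to a probability distribution."""
    attn = attn.copy()
    attn[:, text_token_ids] *= boost
    return attn / attn.sum(axis=-1, keepdims=True)

# Toy iterative-refinement loop over random stand-in tensors.
rng = np.random.default_rng(0)
latent = rng.standard_normal((4, 8, 8))   # (channels, h, w) image latent
glyph = rng.standard_normal((4, 8, 8))    # glyph template in latent space
mask = np.zeros((4, 8, 8))
mask[:, 2:6, 2:6] = 1.0                   # region where the text should appear
for _ in range(3):                        # repeated injection = refinement
    latent = inject_glyph_template(latent, glyph, mask, strength=0.3)
```

In a real pipeline these operations would hook into a diffusion model's denoising loop and its cross-attention layers; the sketch only shows the shape of the two interventions.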