🤖 AI Summary
Existing vision-language models (VLMs) for SVG generation produce only text tokens and receive no visual feedback, which leads to semantic misalignment, geometric incoherence, and poor visual appeal. To address this, we propose the first end-to-end unified multimodal framework that jointly generates both image tokens and SVG tokens. Our method introduces an internal visual guidance mechanism that dynamically calibrates SVG decoding during inference using the model's own image predictions. We further design a joint training paradigm for image and SVG generation, incorporate cross-modal token alignment, and adopt a test-time visual-guided scaling strategy. Evaluated across multiple metrics, our approach significantly outperforms state-of-the-art methods: SVG outputs show marked improvements in visual fidelity, semantic alignment, and syntactic correctness. The framework generalizes across diverse applications, including icons, charts, and UI elements, establishing a new foundation for visually grounded vector graphic synthesis.
📝 Abstract
Recent vision-language model (VLM)-based approaches have achieved impressive results on SVG generation. However, because they generate only text and lack visual signals during decoding, they often struggle with complex semantics and fail to produce visually appealing or geometrically coherent SVGs. We introduce DuetSVG, a unified multimodal model that jointly generates image tokens and corresponding SVG tokens in an end-to-end manner. DuetSVG is trained on both image and SVG datasets. At inference, we apply a novel test-time scaling strategy that leverages the model's native visual predictions as guidance to improve SVG decoding quality. Extensive experiments show that DuetSVG outperforms existing approaches, producing visually faithful, semantically aligned, and syntactically clean SVGs across a wide range of applications.
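To make the test-time scaling idea concrete, here is a minimal best-of-N sketch: candidate SVG token sequences are scored against the model's own predicted image features, and the closest match is kept. All names, the cosine scoring, and the stub `embed` function are illustrative assumptions on our part, not the paper's actual implementation.

```python
def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    num = sum(x * y for x, y in zip(a, b))
    da = sum(x * x for x in a) ** 0.5
    db = sum(y * y for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def visual_guided_decode(candidates, predicted_image_feat, embed):
    # Best-of-N test-time scaling (hypothetical sketch): score each
    # candidate SVG token sequence by how well its rendered features
    # match the model's own image prediction, and keep the best one.
    # `embed` stands in for a real render-then-encode step.
    best, best_score = None, float("-inf")
    for svg_tokens in candidates:
        score = cosine(embed(svg_tokens), predicted_image_feat)
        if score > best_score:
            best, best_score = svg_tokens, score
    return best, best_score

# Toy usage: the "image prediction" favors a single circle.
embed = lambda toks: [float(toks.count("circle")), float(toks.count("rect"))]
target = [1.0, 0.0]
cands = [["rect"], ["circle"], ["circle", "rect"]]
best, _ = visual_guided_decode(cands, target, embed)
# best == ["circle"]
```

In the real system the scoring signal would come from the model's jointly generated image tokens rather than a hand-written feature count, but the selection loop has the same shape.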