DuetSVG: Unified Multimodal SVG Generation with Internal Visual Guidance

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models (VLMs) for SVG generation produce only text tokens, lacking visual feedback—leading to semantic misalignment, geometric incoherence, and poor visual appeal. To address this, we propose the first end-to-end unified multimodal framework that jointly generates both image tokens and SVG tokens. Our method introduces an internal visual guidance mechanism that dynamically calibrates SVG decoding during inference using the model’s own image predictions. We further design a joint training paradigm for image and SVG generation, incorporate cross-modal token alignment, and adopt a test-time visual-guided scaling strategy. Evaluated across multiple metrics, our approach significantly outperforms state-of-the-art methods: SVG outputs exhibit marked improvements in visual fidelity, semantic alignment, and syntactic correctness. The framework demonstrates strong generalization across diverse applications—including icons, charts, and UI elements—establishing a new foundation for visually grounded vector graphic synthesis.

📝 Abstract
Recent vision-language model (VLM)-based approaches have achieved impressive results on SVG generation. However, because they generate only text and lack visual signals during decoding, they often struggle with complex semantics and fail to produce visually appealing or geometrically coherent SVGs. We introduce DuetSVG, a unified multimodal model that jointly generates image tokens and corresponding SVG tokens in an end-to-end manner. DuetSVG is trained on both image and SVG datasets. At inference, we apply a novel test-time scaling strategy that leverages the model's native visual predictions as guidance to improve SVG decoding quality. Extensive experiments show that our method outperforms existing methods, producing visually faithful, semantically aligned, and syntactically clean SVGs across a wide range of applications.
Problem

Research questions and friction points this paper is trying to address.

Text-only decoding gives VLMs no visual feedback while generating SVG tokens
Generated SVGs often suffer from semantic misalignment and geometric incoherence
No existing framework unifies image and SVG token generation end-to-end
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified multimodal model generating image and SVG tokens
End-to-end training on both image and SVG datasets
Test-time scaling using visual predictions to guide decoding
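The test-time scaling idea above can be pictured as best-of-N reranking: decode several candidate SVGs, render each one, and keep the candidate whose rendering best matches the image the model itself predicted. The sketch below is speculative and not the paper's implementation; `select_best_svg`, the stand-in render mapping, and the plain cosine similarity are illustrative assumptions.

```python
# Speculative sketch of test-time visual-guided scaling (NOT the
# paper's implementation): rerank candidate SVG decodings by how
# closely their renderings match the model's own image prediction.
from typing import Callable, Sequence
import math


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Similarity between two flattened image feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def select_best_svg(
    candidates: Sequence[str],
    predicted_image: Sequence[float],
    render: Callable[[str], Sequence[float]],
) -> str:
    """Best-of-N selection: return the candidate SVG whose rendering
    is most similar to the internally predicted image."""
    return max(
        candidates,
        key=lambda svg: cosine_similarity(render(svg), predicted_image),
    )


# Toy usage: 'render' maps each SVG string to a fake 4-pixel image.
fake_renders = {
    '<svg><circle r="5"/></svg>': [1.0, 0.0, 1.0, 0.0],
    '<svg><rect width="5" height="5"/></svg>': [0.0, 1.0, 0.0, 1.0],
}
target = [1.0, 0.1, 0.9, 0.0]  # the model's own image prediction
best = select_best_svg(list(fake_renders), target, fake_renders.__getitem__)
```

In a real system the render callback would rasterize the SVG (e.g. with an SVG renderer) and the similarity would come from a learned image encoder rather than raw pixels; the reranking logic itself stays the same.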
Authors
Peiying Zhang, City University of Hong Kong
Nanxuan Zhao, Adobe Research
Matthew Fisher, Principal Research Scientist, Adobe Research (Computer Graphics, Machine Learning)
Yiran Xu, Adobe Research
Jing Liao, City University of Hong Kong
Difan Liu, Research Scientist, Adobe Research (Computer Vision, Computer Graphics)