Visually Descriptive Language Model for Vector Graphics Reasoning

📅 2024-04-09
📈 Citations: 4
✨ Influential: 0
🤖 AI Summary
Large multimodal models (LMMs) exhibit a semantic gap between low-level visual perception (e.g., recognizing geometric attributes) and high-level language reasoning when processing vector graphics (SVG). Method: The authors propose the Visually Descriptive Language Model (VDLM), which introduces the Primal Visual Description (PVD), a learnable intermediate representation that encodes an SVG's geometric structure into structured, interpretable text sequences, thereby decoupling visual perception from semantic reasoning. VDLM is trained via self-supervision on synthetic data, requires no human annotations, and supports zero-shot cross-task generalization. Contribution/Results: Experiments show that VDLM significantly outperforms state-of-the-art LMMs (e.g., GPT-4o) on diverse SVG perception and reasoning benchmarks, and that PVD quality correlates positively with task performance, improving interpretability, generalization, and robustness.

📝 Abstract
Despite significant advancements, large multimodal models (LMMs) still struggle to bridge the gap between low-level visual perception -- focusing on shapes, sizes, and layouts -- and high-level language reasoning, such as semantics and logic. This limitation is evident in tasks that require precise visual perception, like comparing geometric properties or solving visual reasoning problems. To study this failure mode, we focus on vector graphics -- images composed of 2D objects and shapes, prevalent in LMM-based tasks in web, design, and OS environments. We identify two key research questions: how can we enable precise visual perception, and how can we facilitate high-level reasoning based on such low-level perceptions? To capture fine visual details, we use Scalable Vector Graphics (SVG) for accurate encoding of visual scenes. However, SVGs are not readily interpretable by LMMs in a zero-shot manner. To tackle this, we propose the Visually Descriptive Language Model (VDLM), which introduces a Primal Visual Description (PVD) as an intermediate textual representation. PVD translates SVGs into a text-based abstraction consisting of primitive attributes (e.g., shape, position, measurement) and their corresponding values. PVD can be learned using task-agnostic synthesized data and represents visual primitives that are universal across vector graphics. This abstraction is more structured, allowing for direct interpretation by foundation models for zero-shot generalization. Without human-annotated data, empirical results show that VDLM significantly improves state-of-the-art LMMs like GPT-4o on various multimodal perception and reasoning tasks. Extensive analyses of VDLM show improved interpretability due to its disentangled perception and reasoning. We also demonstrate a positive correlation between PVD quality and task performance. Project page: https://mikewangwzhl.github.io/VDLM/
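To make the PVD idea concrete, here is a minimal sketch of how an SVG scene could be mapped to a text-based abstraction of primitive attributes (shape, position, measurement). The field names and schema are hypothetical illustrations, not the paper's exact PVD format:

```python
# Illustrative PVD-style abstraction (schema is hypothetical, not the
# paper's exact format): map basic SVG shapes to structured text records
# with primitive attributes -- shape, position, and measurement.
import json
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def svg_to_pvd(svg_text):
    """Convert basic SVG shape elements into a list of primitive descriptions."""
    root = ET.fromstring(svg_text)
    primitives = []
    for el in root.iter():
        tag = el.tag.replace(SVG_NS, "")
        if tag == "circle":
            primitives.append({
                "shape": "circle",
                "position": {"center": [float(el.get("cx", 0)),
                                        float(el.get("cy", 0))]},
                "measurement": {"radius": float(el.get("r", 0))},
            })
        elif tag == "rect":
            primitives.append({
                "shape": "rectangle",
                "position": {"top_left": [float(el.get("x", 0)),
                                          float(el.get("y", 0))]},
                "measurement": {"width": float(el.get("width", 0)),
                                "height": float(el.get("height", 0))},
            })
    return primitives

svg = ('<svg xmlns="http://www.w3.org/2000/svg">'
       '<circle cx="10" cy="20" r="5"/>'
       '<rect x="0" y="0" width="4" height="3"/></svg>')
print(json.dumps(svg_to_pvd(svg), indent=2))
```

The resulting JSON-like text is far more interpretable to a language model than raw SVG path syntax, which is the intuition behind using PVD as the intermediate representation.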
Problem

Research questions and friction points this paper is trying to address.

Bridge the gap between low-level visual perception and high-level language reasoning
Enable precise visual perception for vector graphics tasks
Facilitate high-level reasoning grounded in low-level visual details
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SVG for precise visual encoding
Introduces Primal Visual Description (PVD)
Enables zero-shot generalization in LMMs