SmolDocling: An ultra-compact vision-language model for end-to-end multi-modal document conversion

📅 2025-03-14
🤖 AI Summary
This work addresses the challenge of jointly modeling content, structure, and spatial layout in end-to-end multimodal document parsing. It proposes a lightweight (256M-parameter) vision-language Transformer that converts scanned document pages directly into DocTags, a new coordinate-annotated universal markup language. Contributions include: (1) DocTags, a unified tokenization scheme supporting heterogeneous elements (text, tables, mathematical formulae, code blocks, and figures); (2) layout-aware positional encoding and multi-granularity decoding; and (3) new open-source, domain-specific annotation datasets covering tables, figures, formulae, and code. Experiments demonstrate performance competitive with vision-language models up to 27× larger across diverse document types, at substantially reduced computational cost. The model is publicly released; the datasets will follow shortly.
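To make the DocTags idea concrete, here is a minimal sketch of how a coordinate-annotated markup stream can carry content, structure, and layout in one sequence. The tag names and the `<loc_*>` token format below are illustrative assumptions, not the paper's exact vocabulary:

```python
import re

# Hypothetical DocTags-style page fragment. Tag names and the <loc_*>
# bounding-box tokens are assumptions for illustration only; they are
# not the paper's exact vocabulary.
page = (
    "<doctag>"
    "<section_header><loc_12><loc_8><loc_188><loc_14>1 Introduction</section_header>"
    "<text><loc_12><loc_18><loc_188><loc_42>Documents mix text, tables and figures.</text>"
    "</doctag>"
)

# Each element carries its content plus a quantized bounding box, so a
# single decoder output stream encodes content, structure, and layout.
ELEMENT = re.compile(
    r"<(?P<tag>\w+)>"
    r"<loc_(?P<x1>\d+)><loc_(?P<y1>\d+)><loc_(?P<x2>\d+)><loc_(?P<y2>\d+)>"
    r"(?P<body>.*?)</(?P=tag)>"
)

def parse_doctags(markup: str):
    """Extract (tag, bounding box, text) triples from a DocTags-like string."""
    return [
        (m["tag"],
         (int(m["x1"]), int(m["y1"]), int(m["x2"]), int(m["y2"])),
         m["body"])
        for m in ELEMENT.finditer(markup)
    ]

for tag, box, body in parse_doctags(page):
    print(tag, box, body)
```

Because every element is self-describing in this way, a downstream converter can reconstruct reading order, element type, and page position without a separate layout-analysis stage.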

📝 Abstract
We introduce SmolDocling, an ultra-compact vision-language model targeting end-to-end document conversion. Our model comprehensively processes entire pages by generating DocTags, a new universal markup format that captures all page elements in their full context with location. Unlike existing approaches that rely on large foundational models, or ensemble solutions built from handcrafted pipelines of multiple specialized models, SmolDocling offers end-to-end conversion that accurately captures the content, structure, and spatial location of document elements in a 256M-parameter vision-language model. SmolDocling exhibits robust performance in correctly reproducing document features such as code listings, tables, equations, charts, lists, and more across a diverse range of document types including business documents, academic papers, technical reports, patents, and forms, significantly extending beyond the commonly observed focus on scientific papers. Additionally, we contribute novel publicly sourced datasets for charts, tables, equations, and code recognition. Experimental results demonstrate that SmolDocling competes with other vision-language models that are up to 27 times larger in size, while substantially reducing computational requirements. The model is currently available; the datasets will be publicly available soon.
Problem

Research questions and friction points this paper is trying to address.

End-to-end multi-modal document conversion using a compact model.
Accurate capture of content, structure, and spatial location in documents.
Robust performance across diverse document types with reduced computational needs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ultra-compact vision-language model for end-to-end document conversion
Generates DocTags, a coordinate-annotated markup format, to capture all page elements
Competes with vision-language models up to 27× larger using only 256M parameters