🤖 AI Summary
Traditional OCR systems focus solely on text recognition and struggle to interpret graphical elements such as charts and tables, resulting in significant loss of semantic information during document understanding. This work proposes Multimodal OCR (MOCR), a parsing paradigm that treats graphical components as first-class parsing targets alongside text, enabling unified modeling and end-to-end generation of structured textual representations for multimodal documents. Leveraging a large-scale data engine built from PDFs, web pages, and SVGs, the authors train dots.mocr, a 3-billion-parameter model, through staged pretraining followed by supervised fine-tuning. On olmOCR Bench, dots.mocr achieves a new state-of-the-art score of 83.9; it ranks second only to Gemini 3 Pro in the OCR Arena, while notably surpassing Gemini 3 Pro in the quality of SVG outputs generated from graphical content.
📝 Abstract
We present Multimodal OCR (MOCR), a document parsing paradigm that jointly parses text and graphics into unified textual representations. Unlike conventional OCR systems that focus on text recognition and leave graphical regions as cropped pixels, our method, termed dots.mocr, treats visual elements such as charts, diagrams, tables, and icons as first-class parsing targets, enabling systems to parse documents while preserving semantic relationships across elements. It offers several advantages: (1) it reconstructs both text and graphics as structured outputs, enabling more faithful document reconstruction; (2) it supports end-to-end training over heterogeneous document elements, allowing models to exploit semantic relations between textual and visual components; and (3) it converts previously discarded graphics into reusable code-level supervision, unlocking multimodal supervision embedded in existing documents. To make this paradigm practical at scale, we build a comprehensive data engine from PDFs, rendered webpages, and native SVG assets, and train a compact 3B-parameter model through staged pretraining and supervised fine-tuning. We evaluate dots.mocr from two perspectives: document parsing and structured graphics parsing. On document parsing benchmarks, it ranks second only to Gemini 3 Pro on our OCR Arena Elo leaderboard, surpasses existing open-source document parsing systems, and sets a new state of the art of 83.9 on olmOCR Bench. On structured graphics parsing, dots.mocr achieves higher reconstruction quality than Gemini 3 Pro across image-to-SVG benchmarks, demonstrating strong performance on charts, UI layouts, scientific figures, and chemical diagrams. These results show a scalable path toward building large-scale image-to-code corpora for multimodal pretraining. Code and models are publicly available at https://github.com/rednote-hilab/dots.mocr.
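To make the "unified textual representation" idea concrete, the following is a minimal illustrative sketch (not the actual dots.mocr output format; all element names and the schema are hypothetical) of how text blocks and graphical regions might be serialized into a single text stream, with graphics emitted as code-level SVG rather than cropped pixels:

```python
# Hypothetical sketch of an MOCR-style unified output: textual elements
# become Markdown, graphical elements become inline SVG, and both are
# concatenated into one textual representation. The element schema here
# is an illustrative assumption, not the real dots.mocr format.

def render_element(el: dict) -> str:
    """Serialize one parsed document element to its textual form."""
    if el["type"] == "heading":
        return "# " + el["text"]
    if el["type"] == "paragraph":
        return el["text"]
    if el["type"] == "chart":
        # A graphical region is parsed into structured SVG code,
        # preserving its semantics instead of discarding it.
        bars = "".join(
            f'<rect x="{i * 30}" y="{100 - h}" width="20" height="{h}"/>'
            for i, h in enumerate(el["values"])
        )
        return f'<svg viewBox="0 0 200 100">{bars}</svg>'
    raise ValueError(f"unknown element type: {el['type']}")

def parse_to_unified_text(elements: list[dict]) -> str:
    """Join all elements, text and graphics alike, into one output stream."""
    return "\n\n".join(render_element(el) for el in elements)

# Example: a page with a heading, a paragraph, and a bar chart.
doc = [
    {"type": "heading", "text": "Quarterly Revenue"},
    {"type": "paragraph", "text": "Revenue grew steadily across Q1-Q3."},
    {"type": "chart", "values": [40, 60, 85]},
]
print(parse_to_unified_text(doc))
```

Because the chart ends up as SVG code in the same stream as the text, the whole document becomes a single code-level training target, which is the kind of supervision the paradigm above aims to unlock.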