🤖 AI Summary
This work addresses the underexplored challenge of poor vertical Japanese text understanding in multimodal large language models (MLLMs). We present the first systematic evaluation of mainstream MLLMs on vertical Japanese OCR—a critical yet neglected task in document understanding. To this end, we introduce a high-quality Japanese OCR dataset featuring both synthetic renderings and real-world images, with dual horizontal and vertical layout annotations. We further propose a novel synthetic data generation strategy specifically designed for vertical text and conduct targeted fine-tuning of MLLMs using this data. Experimental results reveal that existing MLLMs underperform significantly on vertical layouts compared to horizontal ones; after fine-tuning, key metrics—including character accuracy and layout-aware recognition precision—improve by an average of 23.6%. All resources—including the dataset, training code, and benchmarking framework—are publicly released to advance research in multilingual and multi-layout document understanding.
📝 Abstract
Multimodal Large Language Models (MLLMs) have advanced rapidly in recent years and are now being applied to visual document understanding tasks. They are expected to process a wide range of document images across languages, including Japanese. Understanding documents from images requires models to read what is written in them. Since some Japanese documents are written vertically, support for vertical writing is essential. However, research specifically focused on vertically written Japanese text remains limited. In this study, we evaluate the reading capability of existing MLLMs on vertically written Japanese text. First, we generate a synthetic Japanese OCR dataset by rendering Japanese texts into images and use it for both model fine-tuning and evaluation. This dataset includes Japanese text in both horizontal and vertical writing. We also create an evaluation dataset sourced from real-world document images containing vertically written Japanese text. Using these datasets, we demonstrate that existing MLLMs perform worse on vertically written Japanese text than on horizontally written text. Furthermore, we show that training MLLMs on our synthetic Japanese OCR dataset improves the performance of models that previously could not handle vertical writing. The datasets and code are publicly available at https://github.com/llm-jp/eval_vertical_ja.
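For readers unfamiliar with vertical Japanese writing, the key property is that characters run top-to-bottom within a column and columns are read right-to-left. The following is a minimal pure-Python sketch (not from the paper; the function name and column height are illustrative) that rearranges a horizontal string into this vertical layout, which is essentially the transformation a renderer or a layout-aware annotation scheme must account for:

```python
def to_vertical(text: str, height: int) -> str:
    """Arrange `text` into a vertical layout: characters flow
    top-to-bottom within a column, columns are read right-to-left.

    `height` is the number of characters per column.
    Returns a string whose newline-separated rows visually mimic
    the vertical layout when printed with a monospaced CJK font.
    """
    # Split the reading-order text into columns of `height` characters.
    cols = [text[i:i + height] for i in range(0, len(text), height)]
    # Pad the last column with full-width spaces so all columns align.
    cols = [c.ljust(height, "\u3000") for c in cols]
    # The first column in reading order is the rightmost one on the page,
    # so reverse the column order before emitting left-to-right rows.
    rows = ("".join(col[r] for col in reversed(cols)) for r in range(height))
    return "\n".join(rows)
```

For example, `to_vertical("あいうえおか", 3)` places "あいう" in the right column and "えおか" in the left column, so reading down the rightmost column first recovers the original string.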