Jagle: Building a Large-Scale Japanese Multimodal Post-Training Dataset for Vision-Language Models

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scarcity of large-scale, multi-domain Japanese visual question answering (VQA) datasets, which has hindered the development of high-quality Japanese vision-language models. To overcome this limitation, we propose Jagle, the first large-scale Japanese multimodal post-training dataset—comprising approximately 9.2 million samples—automatically constructed from heterogeneous sources including images, image-text pairs, and PDF documents without relying on existing VQA data. Our approach integrates vision-language model–generated question answering, cross-lingual translation, and document text rendering techniques. A 2.2B-parameter vision-language model trained on Jagle outperforms InternVL3.5-2B on average across ten Japanese evaluation benchmarks and approaches the performance of Qwen3-VL-2B-Instruct. Moreover, joint training with FineVision further enhances performance on English tasks, offering a new paradigm for non-English vision-language model data construction.
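The summary above describes a three-part construction pipeline: VLM-generated QA, cross-lingual translation, and document text rendering. As an illustration of the first strategy only, below is a minimal Python sketch of turning raw images into Japanese QA pairs. It is not code from the paper: `vlm_generate` is a placeholder for whatever VLM backend is used, and the prompt wording, output format, and JSON field names are assumptions.

```python
# Illustrative sketch of VLM-based QA generation over raw images.
# Not the paper's implementation: `vlm_generate` is a placeholder for any
# VLM inference call, and the prompt/field names are assumptions.
import json
from pathlib import Path

QA_PROMPT = (
    "Look at the image and write one question about it in Japanese, "
    "followed by its answer. Format: Q: ... / A: ..."
)

def vlm_generate(image_path: str, prompt: str) -> str:
    """Placeholder for a vision-language model call; swap in your own backend."""
    raise NotImplementedError

def build_vqa_records(image_dir: str, out_path: str) -> None:
    records = []
    for image_path in sorted(Path(image_dir).glob("*.jpg")):
        raw = vlm_generate(str(image_path), QA_PROMPT)
        # Keep only outputs that follow the expected "Q: ... A: ..." template.
        if "Q:" in raw and "A:" in raw:
            question = raw.split("Q:", 1)[1].split("A:", 1)[0].strip()
            answer = raw.split("A:", 1)[1].strip()
            records.append(
                {"image": image_path.name, "question": question, "answer": answer}
            )
    # Write one JSON object per line (a common post-training data layout).
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```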
📝 Abstract
Developing vision-language models (VLMs) that generalize across diverse tasks requires large-scale training datasets with diverse content. In English, such datasets are typically constructed by aggregating and curating numerous existing visual question answering (VQA) resources. However, this strategy does not readily extend to other languages, where VQA datasets remain limited in both scale and domain coverage, posing a major obstacle to building high-quality multilingual and non-English VLMs. In this work, we introduce Jagle, the largest Japanese multimodal post-training dataset to date, comprising approximately 9.2 million instances across diverse tasks. Rather than relying on existing VQA datasets, we collect heterogeneous source data, including images, image-text pairs, and PDF documents, and generate VQA pairs through multiple strategies such as VLM-based QA generation, translation, and text rendering. Experiments demonstrate that a 2.2B model trained with Jagle achieves strong performance on Japanese tasks, surpassing InternVL3.5-2B in average score across ten Japanese evaluation tasks and approaching within five points of Qwen3-VL-2B-Instruct. Furthermore, combining Jagle with FineVision does not degrade English performance; instead, it improves English performance compared to training with FineVision alone. To facilitate reproducibility and future research, we release the dataset, trained models, and code.
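The "text rendering" strategy mentioned in the abstract can be pictured with a small sketch: plain text (for example, extracted from a PDF) is drawn onto a blank image so the resulting sample forces the model to read the document visually. This is only an illustration under assumed parameters; the font path, line width, and image size are not taken from the paper.

```python
# Minimal sketch of the text-rendering strategy: convert plain text into a
# synthetic document image. Font path, layout, and sizes are assumptions.
from PIL import Image, ImageDraw, ImageFont

def render_text_image(text: str, out_path: str,
                      font_path: str = "NotoSansCJK-Regular.ttc") -> None:
    # A Japanese-capable font is required; the default PIL font cannot render CJK.
    font = ImageFont.truetype(font_path, 24)
    # Japanese has no spaces, so wrap by slicing into fixed-length chunks.
    lines = [text[i:i + 30] for i in range(0, len(text), 30)]
    img = Image.new("RGB", (800, 40 + 32 * len(lines)), "white")
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        draw.text((20, 20 + 32 * i), line, fill="black", font=font)
    img.save(out_path)
```

A rendered passage would then be paired with a question whose answer appears only in the image, so the sample exercises document reading rather than plain-text recall.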
Problem

Research questions and friction points this paper is trying to address.

vision-language models
multilingual VLMs
VQA datasets
Japanese multimodal data
data scarcity
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal dataset
VQA generation
Japanese VLM
cross-lingual transfer
post-training