WAON: Large-Scale and High-Quality Japanese Image-Text Pair Dataset for Vision-Language Models

📅 2025-10-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Japanese vision-language models suffer from a scarcity of large-scale, high-quality image-text pairs. To address this, we introduce WAON, a rigorously filtered and deduplicated Japanese dataset of approximately 155 million high-quality image-text pairs, alongside WAON-Bench, a manually curated benchmark for Japanese cultural image classification. Data are sourced from Common Crawl and processed through multi-stage cleaning and quality control. We employ SigLIP2 for multilingual fine-tuning and evaluation. Experiments demonstrate that models trained on WAON achieve state-of-the-art performance across multiple Japanese cultural understanding benchmarks, outperforming ReLAION by +3.2% in accuracy and converging 1.8× faster. This work systematically bridges a critical gap in foundational Japanese multimodal resources.

📝 Abstract
Large-scale and high-quality image-text pair datasets play an important role in developing high-performing Vision-Language Models (VLMs). In this work, we introduce WAON, a large-scale and high-quality Japanese image-text pair dataset containing approximately 155 million examples, collected from Common Crawl. Our dataset construction pipeline employs various techniques, including filtering and deduplication, which have been shown to be effective in previous studies. To evaluate its effectiveness, we also construct WAON-Bench, a manually curated benchmark for Japanese cultural image classification consisting of 374 classes, and conduct experiments using both WAON and the Japanese subset of ReLAION, one of the most widely used vision-language datasets. We fine-tune SigLIP2, a strong multilingual model, on both datasets. The results demonstrate that WAON enhances model performance on WAON-Bench more efficiently than ReLAION and achieves higher accuracy across all evaluated benchmarks. Furthermore, the model fine-tuned on WAON achieves state-of-the-art performance on several Japanese cultural benchmarks. We release our dataset, model, and code at https://speed1313.github.io/WAON.
Problem

Research questions and friction points this paper is trying to address.

Creating a large-scale Japanese image-text dataset for vision-language models
Developing effective data filtering and deduplication methods for dataset quality
Evaluating Japanese cultural image understanding via a dedicated classification benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructed large-scale Japanese image-text dataset WAON
Applied filtering and deduplication techniques for quality
Fine-tuned multilingual SigLIP2 model for cultural benchmarks
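The filtering and deduplication steps listed above can be sketched roughly as follows. Note this is a minimal illustration, not the paper's actual pipeline: the length thresholds, the Japanese-character-ratio heuristic, and all helper names are assumptions introduced here for the example.

```python
import hashlib
import re
import unicodedata

# Hypothetical thresholds -- the paper's real filtering rules are not
# specified on this page; these values only illustrate the general shape.
MIN_CAPTION_LEN = 5
MAX_CAPTION_LEN = 200
JAPANESE_CHARS = re.compile(r"[\u3040-\u30ff\u4e00-\u9fff]")  # kana + common kanji

def is_japanese(caption: str, min_ratio: float = 0.3) -> bool:
    """Keep captions where at least `min_ratio` of characters are Japanese."""
    if not caption:
        return False
    hits = len(JAPANESE_CHARS.findall(caption))
    return hits / len(caption) >= min_ratio

def clean_and_dedup(pairs):
    """Filter (url, caption) pairs and drop exact duplicates by caption hash."""
    seen = set()
    for url, caption in pairs:
        caption = unicodedata.normalize("NFKC", caption).strip()
        if not (MIN_CAPTION_LEN <= len(caption) <= MAX_CAPTION_LEN):
            continue  # too short or too long
        if not is_japanese(caption):
            continue  # language filter
        digest = hashlib.sha256(caption.encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact-duplicate caption
        seen.add(digest)
        yield url, caption

pairs = [
    ("https://example.com/a.jpg", "東京の桜が満開です"),
    ("https://example.com/b.jpg", "東京の桜が満開です"),  # exact duplicate
    ("https://example.com/c.jpg", "short en"),            # not Japanese
]
kept = list(clean_and_dedup(pairs))
print(len(kept))  # → 1
```

Real web-scale pipelines typically add image-side checks (resolution, NSFW classifiers) and near-duplicate detection via perceptual or embedding hashes rather than exact caption hashing, but the filter-then-dedup structure is the same.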