🤖 AI Summary
Scene-text visual question answering (VQA) in low-resource languages such as Vietnamese suffers from scarce annotated data and a lack of specialized models. Method: We propose ViSignVQA—the first real-world Vietnamese signage VQA benchmark, comprising 10,762 images and 25,573 QA pairs, featuring bilingual text, informal expressions, and complex color layouts. We introduce the first OCR-augmented multi-agent framework integrating SwinTextSpotter (Vietnamese OCR), ViT5, BLIP-2, and LaTr, with perception–reasoning–GPT-4 collaboration and majority voting for multimodal fusion. Results: OCR text injection boosts F1 by up to 209%; our system achieves 75.98% overall accuracy. This work delivers the first open-source Vietnamese signage VQA dataset, evaluation benchmark, and tailored multimodal framework—establishing a new paradigm for low-resource language multimodal understanding.
📝 Abstract
Understanding signboard text in natural scenes is essential for real-world applications of Visual Question Answering (VQA), yet it remains underexplored, particularly in low-resource languages. We introduce ViSignVQA, the first large-scale Vietnamese dataset designed for signboard-oriented VQA, comprising 10,762 images and 25,573 question-answer pairs. The dataset captures the diverse linguistic, cultural, and visual characteristics of Vietnamese signboards, including bilingual text, informal phrasing, and visual elements such as color and layout. To benchmark this task, we adapted state-of-the-art VQA models (e.g., BLIP-2, LaTr, PreSTU, and SaL) by integrating a Vietnamese OCR model (SwinTextSpotter) and a Vietnamese pretrained language model (ViT5). The experimental results highlight the significant role of OCR-enhanced context, with F1-score improvements of up to 209% when OCR text is appended to questions. Additionally, we propose a multi-agent VQA framework combining perception and reasoning agents with GPT-4, achieving 75.98% accuracy via majority voting. Our study presents the first large-scale multimodal dataset for Vietnamese signboard understanding, underscoring the importance of domain-specific resources in enhancing text-based VQA for low-resource languages. ViSignVQA serves as a benchmark that captures real-world scene text characteristics and supports the development and evaluation of OCR-integrated VQA models in Vietnamese.
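Two mechanisms mentioned above—appending OCR text to the question and fusing agent outputs by majority voting—can be illustrated with a minimal sketch. This is not the authors' implementation; the helper names (`augment_question`, `majority_vote`) and the OCR-token separator are illustrative assumptions:

```python
from collections import Counter

def augment_question(question: str, ocr_tokens: list[str]) -> str:
    # Hypothetical formatting: append OCR-detected scene text to the question
    # so the VQA model sees the signboard text as extra context.
    return f"{question} [OCR] {' '.join(ocr_tokens)}"

def majority_vote(answers: list[str]) -> str:
    # Normalize answers and return the most frequent one;
    # ties are broken by first occurrence (Counter preserves insertion order).
    counts = Counter(a.strip().lower() for a in answers)
    return counts.most_common(1)[0][0]

# Example: three agents (e.g., BLIP-2, LaTr, GPT-4) answer the same question.
q = augment_question("What does the sign sell?", ["Phở", "Bò", "Hà", "Nội"])
final = majority_vote(["pho bo", "Pho Bo ", "pho ga"])
print(q)
print(final)  # -> "pho bo"
```

In practice the fusion step may also weight agents by confidence, but plain majority voting is what the reported 75.98% accuracy refers to.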