Towards Signboard-Oriented Visual Question Answering: ViSignVQA Dataset, Method and Benchmark

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Scene-text visual question answering (VQA) in a low-resource language such as Vietnamese suffers from scarce annotated data and a lack of specialized models. Method: We propose ViSignVQA, the first real-world Vietnamese signboard VQA benchmark, comprising 10,762 images and 25,573 QA pairs and featuring bilingual text, informal expressions, and complex color layouts. We introduce the first OCR-augmented multi-agent framework integrating SwinTextSpotter (Vietnamese OCR), ViT5, BLIP-2, and LaTr, with perception-reasoning-GPT-4 collaboration and majority voting for multimodal fusion. Results: Injecting OCR text boosts F1 by up to 209%, and the full system achieves 75.98% overall accuracy. This work delivers the first open-source Vietnamese signboard VQA dataset, evaluation benchmark, and tailored multimodal framework, establishing a new paradigm for low-resource multimodal understanding.

📝 Abstract
Understanding signboard text in natural scenes is essential for real-world applications of Visual Question Answering (VQA), yet remains underexplored, particularly in low-resource languages. We introduce ViSignVQA, the first large-scale Vietnamese dataset designed for signboard-oriented VQA, which comprises 10,762 images and 25,573 question-answer pairs. The dataset captures the diverse linguistic, cultural, and visual characteristics of Vietnamese signboards, including bilingual text, informal phrasing, and visual elements such as color and layout. To benchmark this task, we adapted state-of-the-art VQA models (e.g., BLIP-2, LaTr, PreSTU, and SaL) by integrating a Vietnamese OCR model (SwinTextSpotter) and a Vietnamese pretrained language model (ViT5). The experimental results highlight the significant role of the OCR-enhanced context, with F1-score improvements of up to 209% when the OCR text is appended to questions. Additionally, we propose a multi-agent VQA framework combining perception and reasoning agents with GPT-4, achieving 75.98% accuracy via majority voting. Our study presents the first large-scale multimodal dataset for Vietnamese signboard understanding. This underscores the importance of domain-specific resources in enhancing text-based VQA for low-resource languages. ViSignVQA serves as a benchmark capturing real-world scene text characteristics and supporting the development and evaluation of OCR-integrated VQA models in Vietnamese.
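The abstract's core finding is that appending OCR-recognized scene text to the question yields large F1 gains. A minimal sketch of that input-construction step is below; the separator token and function name are illustrative assumptions, not the paper's exact implementation.

```python
def build_ocr_augmented_question(question: str, ocr_tokens: list[str], sep: str = "[OCR]") -> str:
    """Append detected signboard text to the question so the VQA model can
    ground its answer in the recognized scene text (hypothetical format)."""
    if not ocr_tokens:
        return question
    return f"{question} {sep} {' '.join(ocr_tokens)}"
```

In practice the OCR tokens would come from a text-spotting model such as SwinTextSpotter, and the augmented string would be fed to the downstream question-answering model in place of the raw question.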
Problem

Research questions and friction points this paper is trying to address.

Develops a Vietnamese signboard-oriented VQA dataset with diverse linguistic and visual features
Integrates OCR and language models to enhance VQA performance for low-resource languages
Proposes a multi-agent framework combining perception and reasoning for improved accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapted VQA models with Vietnamese OCR and language models
Proposed multi-agent framework combining perception and reasoning agents
Created large-scale Vietnamese dataset for signboard-oriented VQA
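The multi-agent fusion described above combines answers from several agents via majority voting. A minimal sketch of such a vote is shown here; the normalization rule and tie-breaking behavior are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

def normalize(answer: str) -> str:
    """Lowercase and strip surrounding whitespace/punctuation so
    near-identical answers from different agents are counted together."""
    return answer.strip().lower().rstrip(".!?")

def majority_vote(candidates: list[str]) -> str:
    """Return the most frequent normalized answer among the agents'
    candidates; ties resolve to the earliest-seen answer."""
    counts = Counter(normalize(c) for c in candidates)
    return counts.most_common(1)[0][0]
```

For example, given answers from a perception agent, a reasoning agent, and GPT-4, `majority_vote` selects the answer at least two of them agree on after normalization.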
Hieu Minh Nguyen
University of Information Technology, Ho Chi Minh City, Vietnam. Vietnam National University, Ho Chi Minh City, Vietnam.
Tam Le-Thanh Dang
University of Information Technology, Ho Chi Minh City, Vietnam. Vietnam National University, Ho Chi Minh City, Vietnam.
Kiet Van Nguyen
University of Information Technology, VNU-HCM
Data Science · Artificial Intelligence · Computational Linguistics