LaV-CoT: Language-Aware Visual CoT with Multi-Aspect Reward Optimization for Real-World Multilingual VQA

📅 2025-09-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multilingual visual question answering (mVQA) methods rely mainly on text-based chain-of-thought (CoT) reasoning, which limits their capacity for cross-lingual multimodal reasoning and industrial deployment. To address this, we propose Language-aware Visual Chain-of-Thought (LaV-CoT), the first framework to explicitly incorporate linguistic characteristics into the visual reasoning pipeline, establishing an interpretable multi-stage paradigm of text summarization with bounding boxes, language identification, spatial object-level captioning, and step-by-step logical inference. The method integrates automated multilingual CoT annotation generation, supervised fine-tuning, and language-aware Group Relative Policy Optimization, guided by a multi-aspect reward mechanism covering language consistency, structural accuracy, and semantic alignment. On multiple multilingual VQA benchmarks, LaV-CoT outperforms same-scale open-source models by up to ~9.5% and surpasses models with twice the parameter count by ~2.6%, while also exceeding proprietary models such as GPT-4o-0513 and Gemini-2.5-flash; an online A/B test further validates its effectiveness on real-world data.
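The multi-aspect reward described above can be pictured as three verifiable scorers combined into a single scalar per sampled response. The sketch below is only an illustration under assumed conventions: the stage tags, the `detect_lang` callable, and the weights are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a three-part verifiable reward (language consistency,
# structural accuracy, semantic alignment). Tags, detect_lang, and weights
# are illustrative assumptions, not the paper's actual implementation.

STAGE_TAGS = ["<summary>", "<language>", "<objects>", "<reasoning>", "<answer>"]  # assumed schema

def structural_reward(output: str) -> float:
    """Structural accuracy: are all reasoning stages present and in order?"""
    positions = [output.find(tag) for tag in STAGE_TAGS]
    present = [p for p in positions if p >= 0]
    coverage = len(present) / len(STAGE_TAGS)
    ordered = present == sorted(present)
    return coverage if ordered else 0.5 * coverage  # penalize out-of-order stages

def language_reward(output: str, target_lang: str, detect_lang) -> float:
    """Language consistency: 1.0 if the response stays in the question's language."""
    return 1.0 if detect_lang(output) == target_lang else 0.0

def semantic_reward(answer: str, reference: str) -> float:
    """Crude token-overlap proxy for semantic alignment with a reference answer."""
    a, b = set(answer.lower().split()), set(reference.lower().split())
    return len(a & b) / max(len(a | b), 1)

def multi_aspect_reward(output: str, answer: str, reference: str,
                        target_lang: str, detect_lang,
                        weights=(0.3, 0.3, 0.4)) -> float:
    """Weighted sum of the three verifiable rewards (weights assumed)."""
    return (weights[0] * language_reward(output, target_lang, detect_lang)
            + weights[1] * structural_reward(output)
            + weights[2] * semantic_reward(answer, reference))
```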

📝 Abstract
As large vision language models (VLMs) advance, their capabilities in multilingual visual question answering (mVQA) have significantly improved. Chain-of-thought (CoT) reasoning has been proven to enhance interpretability and complex reasoning. However, most existing approaches rely primarily on textual CoT and provide limited support for multilingual multimodal reasoning, constraining their deployment in real-world applications. To address this gap, we introduce LaV-CoT, the first Language-aware Visual CoT framework with Multi-Aspect Reward Optimization. LaV-CoT incorporates an interpretable multi-stage reasoning pipeline consisting of Text Summary with Bounding Box (BBox), Language Identification, Spatial Object-level Captioning, and Step-by-step Logical Reasoning. Following this reasoning pipeline, we design an automated data curation method that generates multilingual CoT annotations through iterative generation, correction, and refinement, enabling scalable and high-quality training data. To improve reasoning and generalization, LaV-CoT adopts a two-stage training paradigm combining Supervised Fine-Tuning (SFT) with Language-aware Group Relative Policy Optimization (GRPO), guided by verifiable multi-aspect rewards including language consistency, structural accuracy, and semantic alignment. Extensive evaluations on public datasets including MMMB, Multilingual MMBench, and MTVQA show that LaV-CoT achieves up to ~9.5% accuracy improvements over open-source baselines of similar size and even surpasses models with 2× larger scales by ~2.6%. Moreover, LaV-CoT outperforms advanced proprietary models such as GPT-4o-0513 and Gemini-2.5-flash. We further conducted an online A/B test to validate our method on real-world data, highlighting its effectiveness for industrial deployment. Our code is available at: https://github.com/HJNVR/LaV-CoT
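As a rough sketch of the group-relative part of Language-aware GRPO (not the paper's implementation), each sampled completion's reward can be normalized against the statistics of its sampling group before weighting the policy-gradient update. The reward values below are placeholder numbers, and in practice a KL penalty toward the SFT reference model is typically added.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each completion's reward by the mean and
    standard deviation of its group (the completions sampled for one mVQA prompt)."""
    mean_r = statistics.fmean(rewards)
    std_r = statistics.pstdev(rewards)
    return [(r - mean_r) / (std_r + eps) for r in rewards]

# Hypothetical usage: rewards would come from a multi-aspect scorer such as the
# sketch after the AI summary; each completion's tokens are then weighted by its
# advantage in the policy-gradient loss.
rewards = [0.82, 0.64, 0.91, 0.37]  # placeholder group of 4 sampled completions
print(group_relative_advantages(rewards))
```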
Problem

Research questions and friction points this paper is trying to address.

Enhancing multilingual visual question answering with visual reasoning
Improving interpretability through automated multilingual CoT annotations
Optimizing multi-aspect rewards for real-world VQA deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language-aware Visual CoT framework
Multi-aspect reward optimization training
Automated multilingual data curation pipeline
🔎 Similar Papers
No similar papers found.
Jing Huang
Ant Group, Singapore, Singapore
Zhiya Tan
Nanyang Technological University, Singapore, Singapore
Shutao Gong
Ant Group, Changsha, China
Fanwei Zeng
Ant Group, Hangzhou, China
Jianshu Li
National University of Singapore
Computer Vision · Machine Learning · Face Analysis