LOVA3: Learning to Visual Question Answering, Asking and Assessment

📅 2024-05-23
🏛️ Neural Information Processing Systems
📈 Citations: 2
Influential: 0
🤖 AI Summary
Current multimodal large language models (MLLMs) are largely confined to visual question answering (VQA), lacking autonomous question generation and question evaluation capabilities—key limitations hindering human-like multimodal understanding and self-directed learning. Method: We propose LOVA3, the first unified framework integrating VQA, generative question answering (GenQA), and evaluative question answering (EvalQA) within a single MLLM. Our approach introduces a dual-task collaborative training paradigm and establishes EvalQABench—the first large-scale visual question evaluation benchmark (69K samples). Leveraging instruction tuning, synthetic data augmentation, and multi-task joint optimization, we enhance model performance across diverse VQA and reasoning benchmarks. Contribution/Results: Experiments demonstrate that closing the triadic capability loop—VQA, GenQA, and EvalQA—significantly improves model depth of understanding, cross-task generalization, and self-reflective reasoning, establishing a novel paradigm for human-inspired multimodal learning.

📝 Abstract
Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge. By enhancing these capabilities, humans can more effectively utilize data, leading to better comprehension and learning outcomes. Current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. Inspired by the human learning mechanism, we introduce LOVA3, an innovative framework named "Learning tO Visual question Answering, Asking and Assessment," designed to equip MLLMs with these additional capabilities. Our approach involves the creation of two supplementary training tasks, GenQA and EvalQA, aimed at fostering the skills of asking and assessing questions in the context of images. To develop the questioning ability, we compile a comprehensive set of multimodal foundational tasks. For assessment, we introduce a new benchmark called EvalQABench, comprising 64,000 training samples (split evenly between positive and negative samples) and 5,000 validation and testing samples. We posit that enhancing MLLMs with the capabilities to answer, ask, and assess questions will enhance their multimodal comprehension, ultimately improving overall performance. To validate this hypothesis, we train MLLMs using the LOVA3 framework and evaluate them on a range of multimodal datasets and benchmarks. Our results demonstrate consistent performance gains, underscoring the critical role of these additional tasks in fostering comprehensive intelligence in MLLMs. The code is available at https://github.com/showlab/LOVA3.
Problem

Research questions and friction points this paper is trying to address.

MLLMs focus on answering questions but lack the ability to ask and assess them.
No existing training tasks foster question generation (GenQA) or question evaluation (EvalQA) in a multimodal setting.
No benchmark exists for evaluating visual question assessment, motivating EvalQABench.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces GenQA and EvalQA training tasks.
Develops multimodal foundational tasks for questioning.
Creates the EvalQABench benchmark for assessment.