No Images, No Problem: Retaining Knowledge in Continual VQA with Questions-Only Memory

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Continual learning in Visual Question Answering (VQACL) suffers from severe vision-language knowledge forgetting and struggles to balance cross-modal stability and plasticity. Method: The paper proposes QUAD (QUestion-only replay with Attention Distillation), a storage-efficient, image-free framework combining textual memory replay with cross-modal attention distillation. It eliminates the need for image storage by retaining only historical questions ("question-only" selective replay), improving both privacy and memory efficiency. It further introduces cross-modal attention consistency distillation, enforcing alignment of intra- and inter-modal attention distributions between old and new tasks to stabilize vision-language associations, together with a lightweight continual-learning regularization objective. Results: The method achieves significant improvements over state-of-the-art approaches on the VQAv2 and NExT-QA benchmarks. It mitigates out-of-answer-set drift, improves long-term cumulative accuracy, and strengthens knowledge retention, demonstrating effective continual adaptation without replaying any visual data.
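The attention consistency distillation described above can be illustrated with a minimal NumPy sketch: a frozen copy of the old-task model provides teacher attention distributions, and the current model is penalized (via a KL term) for drifting away from them. The shapes, function names, and the exact KL direction here are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_consistency_loss(old_logits, new_logits, eps=1e-8):
    """KL(old || new) between attention distributions, averaged over
    heads and query positions.

    old_logits, new_logits: attention logits of shape (heads, queries, keys),
    from the frozen old-task model and the current model respectively.
    This is a hypothetical sketch; the real objective may weight intra-
    and inter-modal attention maps differently.
    """
    p = softmax(old_logits)  # teacher (old task) attention distribution
    q = softmax(new_logits)  # student (current task) attention distribution
    kl = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    return kl.mean()
```

In practice this term would be computed for both intra-modal (question-to-question, image-to-image) and inter-modal (question-to-image) attention maps and added to the task loss with a small weight.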

📝 Abstract
Continual Learning in Visual Question Answering (VQACL) requires models to learn new visual-linguistic tasks (plasticity) while retaining knowledge from previous tasks (stability). The multimodal nature of VQACL presents unique challenges, requiring models to balance stability across visual and textual domains while maintaining plasticity to adapt to novel objects and reasoning tasks. Existing methods, predominantly designed for unimodal tasks, often struggle to balance these demands effectively. In this work, we introduce QUestion-only replay with Attention Distillation (QUAD), a novel approach for VQACL that leverages only past task questions for regularisation, eliminating the need to store visual data and addressing both memory and privacy concerns. QUAD achieves stability by introducing a question-only replay mechanism that selectively uses questions from previous tasks to prevent overfitting to the current task's answer space, thereby mitigating the out-of-answer-set problem. Complementing this, we propose attention consistency distillation, which uniquely enforces both intra-modal and inter-modal attention consistency across tasks, preserving essential visual-linguistic associations. Extensive experiments on VQAv2 and NExT-QA demonstrate that QUAD significantly outperforms state-of-the-art methods, achieving robust performance in continual VQA.
Problem

Research questions and friction points this paper is trying to address.

Enhances continual learning in Visual Question Answering
Balances visual and textual domain stability
Addresses memory and privacy concerns with question-only replay
Innovation

Methods, ideas, or system contributions that make the work stand out.

Question-only replay mechanism
Attention consistency distillation
Eliminates visual data storage
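Because the replay memory stores only past-task questions (plain strings, no images), it can be as simple as a fixed-budget buffer. The sketch below uses reservoir sampling to keep a uniform sample of seen questions; the class name, capacity, and sampling policy are illustrative assumptions, not the paper's implementation.

```python
import random

class QuestionOnlyBuffer:
    """Hypothetical question-only replay memory: stores past-task questions
    as strings, so no visual data is retained (low memory cost, better
    privacy). Sampled questions can then drive replay regularization."""

    def __init__(self, capacity=1000, seed=0):
        self.capacity = capacity
        self.questions = []
        self.rng = random.Random(seed)
        self.seen = 0  # total questions observed so far

    def add(self, question):
        # Reservoir sampling: keeps a uniform sample under a fixed budget.
        self.seen += 1
        if len(self.questions) < self.capacity:
            self.questions.append(question)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.questions[j] = question

    def sample(self, k):
        # Draw up to k distinct stored questions for replay.
        k = min(k, len(self.questions))
        return self.rng.sample(self.questions, k)
```

During training on a new task, sampled questions from this buffer would be fed through the current model to regularize its answer distribution against old-task drift (the out-of-answer-set problem).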