🤖 AI Summary
Visual Question Answering (VQA) faces persistent challenges, including complex reasoning, joint text-image modeling, dataset bias, limited interpretability, and insufficient commonsense knowledge. Method: This work systematically surveys VQA advances since 2015, covering model architectures, from joint embeddings and attention mechanisms to Transformers and multimodal large language models, alongside the evolution of benchmark datasets and applications. It constructs the first comprehensive evolutionary map spanning every developmental stage of the field. Contribution/Results: The study identifies two forward-looking directions: (1) external knowledge augmentation to strengthen reasoning robustness, and (2) interpretable modeling to mitigate bias and black-box limitations. It pinpoints current bottlenecks, such as domain-specific bias and weak long-range compositional reasoning, and articulates a paradigm shift from supervised fine-tuning toward knowledge injection and trustworthy multimodal reasoning. Together, these contributions provide both a theoretical framework and a practical roadmap for advancing trustworthy multimodal AI.
📝 Abstract
Visual Question Answering (VQA) is an interdisciplinary field that bridges computer vision (CV) and natural language processing (NLP), enabling Artificial Intelligence (AI) systems to answer questions about images. Since its inception in 2015, VQA has evolved rapidly, driven by advances in deep learning, attention mechanisms, and transformer-based models. This survey traces the journey of VQA from its early days through major breakthroughs, such as attention mechanisms, compositional reasoning, and the rise of vision-language pre-training. We highlight the key models, datasets, and techniques that shaped the development of VQA systems, emphasizing the pivotal role of transformer architectures and multimodal pre-training in recent progress. We also explore specialized applications of VQA in domains such as healthcare and discuss ongoing challenges, including dataset bias, model interpretability, and the need for commonsense reasoning. Finally, we examine emerging trends in multimodal large language models and the integration of external knowledge, offering insights into future directions for VQA. This paper aims to provide a comprehensive overview of the evolution of VQA, highlighting both its current state and potential advancements.