Towards Top-Down Reasoning: An Explainable Multi-Agent Approach for Visual Question Answering

๐Ÿ“… 2023-11-29
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 3
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing VQA methods often neglect the implicit commonsense knowledge in images and lack interpretable reasoning. To address this, we propose a zero-shot, interpretable multi-agent collaborative framework. Methodologically, we design a tri-agent architecture comprising a Responder, a Seeker, and an Integrator: the Responder leverages a vision-language model (VLM) to generate visual responses; the Seeker employs a large language model (LLM) to actively retrieve and activate commonsense knowledge via a multi-view knowledge base (MVKB); and the Integrator performs cross-modal collaborative reasoning and answer generation. The MVKB enables fine-grained commonsense modeling of visual scenes without any model fine-tuning. Evaluated on multiple VQA benchmarks, our approach significantly outperforms zero-shot baselines while providing transparent, traceable reasoning paths, achieving both performance gains and strong interpretability.
๐Ÿ“ Abstract
Recently, to comprehensively improve Vision Language Models (VLMs) for Visual Question Answering (VQA), several methods have been proposed to further reinforce the inference capabilities of VLMs so that they can tackle VQA tasks independently, rather than merely serving as aids to Large Language Models (LLMs). However, these methods ignore the rich commonsense knowledge embedded in the given VQA image sampled from the real world, and thus cannot fully exploit the VLM's power to achieve optimal performance on the given VQA question. To overcome this limitation, and inspired by the human top-down reasoning process, i.e., systematically exploring relevant issues to derive a comprehensive answer, this work introduces a novel, explainable multi-agent collaboration framework that leverages the expansive knowledge of LLMs to enhance the capabilities of VLMs themselves. Specifically, our framework comprises three agents, i.e., a Responder, a Seeker, and an Integrator, which collaboratively answer the given VQA question by seeking its relevant issues and generating the final answer in such a top-down reasoning process. The VLM-based Responder agent generates answer candidates for the question and responds to other relevant issues. The Seeker agent, primarily based on an LLM, identifies issues relevant to the question to inform the Responder agent and constructs a Multi-View Knowledge Base (MVKB) for the given visual scene by leveraging the built-in world knowledge of the LLM. The Integrator agent combines knowledge from the Seeker and Responder agents to produce the final VQA answer. Extensive evaluations on diverse VQA datasets with a variety of VLMs demonstrate the superior performance and interpretability of our framework over baseline methods in the zero-shot setting, without extra training cost.
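The top-down Responder/Seeker/Integrator loop described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the function names, prompts, and the dictionary-based MVKB are stand-ins, and a real system would back `responder` with a VLM (e.g. BLIP-2 or LLaVA) and `seeker` with an LLM.

```python
# Hypothetical sketch of the top-down multi-agent VQA loop from the paper.
# All agent internals are stubs; real agents would wrap a VLM and an LLM.

def responder(image, question):
    """VLM-based agent: answers the main question or a sub-issue from the image."""
    # Stand-in for a VLM call on (image, question).
    return f"visual answer to: {question}"

def seeker(question, n_issues=2):
    """LLM-based agent: proposes relevant sub-issues and builds a Multi-View
    Knowledge Base (MVKB) of commonsense facts about the visual scene."""
    issues = [f"sub-issue {i} of '{question}'" for i in range(1, n_issues + 1)]
    mvkb = {issue: f"commonsense fact for {issue}" for issue in issues}
    return issues, mvkb

def integrator(question, candidate, issue_answers, mvkb):
    """Combines the Responder's candidate, sub-issue answers, and MVKB entries
    into a final answer with a traceable reasoning path."""
    trace = [f"{issue}: {ans} [{mvkb[issue]}]"
             for issue, ans in issue_answers.items()]
    return {"answer": candidate, "reasoning_trace": trace}

def answer_vqa(image, question):
    candidate = responder(image, question)           # 1. answer candidate
    issues, mvkb = seeker(question)                  # 2. relevant issues + MVKB
    issue_answers = {iss: responder(image, iss)      # 3. Responder answers each issue
                     for iss in issues}
    return integrator(question, candidate, issue_answers, mvkb)  # 4. integrate

result = answer_vqa("image.jpg", "What is the person doing?")
```

The loop is zero-shot by construction: no component is fine-tuned, and the `reasoning_trace` list is what makes the final answer interpretable and traceable.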
Problem

Research questions and friction points this paper is trying to address.

Enhance Visual Question Answering with multi-agent collaboration.
Leverage LLMs to improve VLM performance in VQA tasks.
Introduce top-down reasoning for comprehensive VQA answers.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable multi-agent collaboration framework
Leverages LLMs to enhance VLMs
Top-down reasoning process integration
Zeqing Wang
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong 510000, China
Wentao Wan
Sun Yat-sen University
Artificial Intelligence · Cognitive AI · Deep Learning · Neural-Symbolic · Question Answering
Runmeng Chen
South China Normal University, Guangzhou, Guangdong 510000, China
Qiqing Lao
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong 510000, China
Minjie Lang
Northeastern University, Shenyang, Liaoning 110000, China
Keze Wang
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong 510000, China