🤖 AI Summary
Existing VQA methods often neglect the implicit commonsense knowledge in images and lack interpretable reasoning. To address this, we propose the first zero-shot, interpretable multi-agent collaborative framework. Methodologically, we design a tri-agent architecture comprising a Responder, a Seeker, and an Integrator: the Responder leverages a vision-language model (VLM) to generate visual responses; the Seeker employs a large language model (LLM) to actively retrieve and activate commonsense knowledge via a multi-view knowledge base (MVKB); and the Integrator performs cross-modal collaborative reasoning and answer generation. The MVKB enables fine-grained commonsense modeling of visual scenes without any model fine-tuning. Evaluated on multiple VQA benchmarks, our approach significantly outperforms zero-shot baselines while providing transparent, traceable reasoning paths, achieving both performance gains and strong interpretability.
📄 Abstract
Recently, several methods have been proposed to comprehensively improve Vision-Language Models (VLMs) for Visual Question Answering (VQA) by reinforcing the inference capabilities of VLMs so that they can tackle VQA tasks independently, rather than merely serving as aids to Large Language Models (LLMs). However, these methods ignore the rich commonsense knowledge embedded in the given VQA image, which is sampled from the real world, and therefore cannot fully exploit the power of the VLM on the given VQA question to achieve optimal performance. Attempting to overcome this limitation, and inspired by the human top-down reasoning process, i.e., systematically exploring relevant issues to derive a comprehensive answer, this work introduces a novel, explainable multi-agent collaboration framework that leverages the expansive knowledge of LLMs to enhance the capabilities of VLMs themselves. Specifically, our framework comprises three agents, i.e., a Responder, a Seeker, and an Integrator, which collaboratively answer the given VQA question by seeking its relevant issues and generating the final answer in such a top-down reasoning process. The VLM-based Responder agent generates answer candidates for the question and responds to other relevant issues. The Seeker agent, primarily based on an LLM, identifies issues relevant to the question to inform the Responder agent and constructs a Multi-View Knowledge Base (MVKB) for the given visual scene by leveraging the built-in world knowledge of the LLM. The Integrator agent combines knowledge from the Seeker and Responder agents to produce the final VQA answer. Extensive and comprehensive evaluations on diverse VQA datasets with a variety of VLMs demonstrate the superior performance and interpretability of our framework over baseline methods in the zero-shot setting, without extra training cost.
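The Responder–Seeker–Integrator interaction described above can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the paper's implementation: the function names (`answer_vqa`, `MultiViewKnowledgeBase`), the `(view, sub-question)` format returned by the Seeker's LLM, and the trivial stand-in Integrator are all hypothetical, with the VLM and LLM passed in as opaque callables.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MultiViewKnowledgeBase:
    """Commonsense facts about the scene, grouped by view (hypothetical schema)."""
    views: dict = field(default_factory=dict)

    def add(self, view: str, fact: str) -> None:
        self.views.setdefault(view, []).append(fact)

def answer_vqa(image, question: str,
               vlm: Callable[[object, str], list],
               llm: Callable[[str], list]) -> dict:
    """One top-down pass of the Responder -> Seeker -> Integrator loop (sketch)."""
    # Responder: the VLM produces answer candidates for the main question.
    candidates = vlm(image, question)

    # Seeker: the LLM identifies issues relevant to the question; the
    # Responder answers each one, and the replies populate the MVKB.
    kb = MultiViewKnowledgeBase()
    sub_issues = llm(question)  # assumed shape: [(view, sub_question), ...]
    trace = {}
    for view, issue in sub_issues:
        reply = vlm(image, issue)[0]
        kb.add(view, reply)
        trace[issue] = reply

    # Integrator: fuse candidates with MVKB evidence into the final answer.
    # (Trivial stand-in: pick the top candidate; the paper's agent reasons
    # over the candidates and the knowledge base jointly.)
    final = candidates[0] if candidates else None
    return {"answer": final, "kb": kb.views, "trace": trace}
```

Because the three agents only exchange text, the same loop works zero-shot with any off-the-shelf VLM and LLM, and the returned `trace` and `kb` expose the reasoning path for inspection.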