🤖 AI Summary
Current vision-language models (VLMs) suffer from poor generalization, hallucination, and logical inconsistencies in multimodal visual-textual reasoning. To address these limitations, we propose VLAgent—a plan-execution dual-closed-loop neuro-symbolic system. It first leverages in-context learning to prompt a large language model (LLM) to generate interpretable, stepwise reasoning plans; then, a neuro-symbolic execution module dynamically composes and incrementally verifies plan execution. Key contributions include: (1) the Syntax-Semantics Parser (SS-Parser), the first unified parser that automatically detects and rectifies both syntactic and semantic errors in reasoning plans; (2) the Plan Repairer and multi-level Output Verifiers, which jointly enhance robustness against erroneous or incomplete plans; and (3) modular integration enabling strong cross-task generalization. VLAgent achieves state-of-the-art performance on GQA, MME, NLVR2, and VQAv2—outperforming ViperGPT, VisProg, and other baselines—while maintaining high accuracy and full interpretability.
📝 Abstract
The advancement of large language models (LLMs) and large vision models has fueled rapid progress in multimodal visual-text reasoning. However, existing vision-language models (VLMs) suffer from poor generalization performance. Inspired by recent developments in LLMs for visual reasoning, this paper presents VLAgent, an AI system that creates a step-by-step visual reasoning plan in an easy-to-understand script and executes each step of the plan in real time, integrating the planning script with execution verification through an automated process. In the task-planning phase, VLAgent prompts an LLM through in-context learning to generate a step-by-step plan for each user-submitted text-visual reasoning task. During the plan-execution phase, VLAgent progressively refines the composition of neuro-symbolic executable modules to produce high-confidence reasoning results. VLAgent has three unique design characteristics. First, we improve the quality of plan generation through in-context learning, strengthening logical reasoning by reducing erroneous logic steps, incorrect programs, and LLM hallucinations. Second, we design a syntax-semantics parser that identifies and corrects remaining logic errors in the LLM-generated planning script before launching the plan executor. Third, we employ an ensemble method to improve the generalization performance of our step executor. Extensive experiments on four visual reasoning benchmarks (GQA, MME, NLVR2, VQAv2) show that VLAgent delivers significant performance gains for multimodal text-visual reasoning compared to existing representative VLMs and LLM-based visual composition approaches such as ViperGPT and VisProg, thanks to the novel optimization modules of the VLAgent back-engine (SS-Parser, Plan Repairer, Output Verifiers). Code and data will be made available upon paper acceptance.
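To make the plan-then-execute pipeline concrete, the sketch below shows one way such a system could be wired together: a plan (here hard-coded rather than LLM-generated) is checked for semantic errors, then executed step by step over a registry of composable modules. All names here (`MODULES`, `ss_parse`, `execute`, the toy ops `LOC`/`COUNT`/`EVAL`) are illustrative assumptions, not VLAgent's actual API.

```python
from typing import Callable

# Toy registry of neuro-symbolic modules the executor can compose.
# Real modules would wrap detectors, VQA models, etc.
MODULES: dict[str, Callable[..., object]] = {
    "LOC": lambda image, obj: f"box({obj})",                # locate an object
    "COUNT": lambda boxes: 1,                               # count located boxes
    "EVAL": lambda expr: eval(expr, {"__builtins__": {}}),  # evaluate final expression
}

def ss_parse(plan: list[dict]) -> list[dict]:
    """Minimal stand-in for the SS-Parser: drop steps that name
    unknown modules (a semantic error) before execution."""
    return [step for step in plan if step["op"] in MODULES]

def execute(plan: list[dict]) -> dict:
    """Run steps in order, storing each result in an environment
    so later steps can reference earlier outputs by name."""
    env: dict = {}
    for step in plan:
        args = [env.get(a, a) for a in step["args"]]
        env[step["out"]] = MODULES[step["op"]](*args)
    return env

# Example plan for "How many dogs are in the image?" over a stub image.
plan = [
    {"op": "LOC", "args": ["IMAGE", "dog"], "out": "BOXES"},
    {"op": "BADOP", "args": [], "out": "X"},  # semantic error: unknown module
    {"op": "COUNT", "args": ["BOXES"], "out": "N"},
    {"op": "EVAL", "args": ["1 + 0"], "out": "ANSWER"},
]

env = execute(ss_parse(plan))
print(env["ANSWER"])  # an output verifier would cross-check this result
```

In the paper's full system, the parser also repairs (rather than just drops) faulty steps, and multi-level output verifiers validate intermediate and final results; this sketch only shows the basic compose-and-check control flow.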