AI Summary
Vision-language models (VLMs) often lack logical consistency in systematic visual reasoning. To address this, we propose Vision-Language Programs (VLP), a framework that automatically compiles VLM-generated natural language descriptions into executable neural-symbolic programs for structured perception and formal reasoning over images. VLP integrates program synthesis, neural-symbolic computation, and structured prompting to balance perceptual flexibility with logical rigor, enabling human-interpretable reasoning traces and mitigating shortcut learning. Evaluated on both synthetic and real-world benchmarks, VLP significantly outperforms direct prompting and structured prompting baselines, improving accuracy and output consistency on complex visual reasoning tasks. Notably, VLP achieves the first end-to-end generation of image-executable programs from natural language instructions, bridging high-level semantics with grounded, verifiable computation.
Abstract
Vision-language models (VLMs) achieve strong performance on multimodal tasks but often fail at systematic visual reasoning, producing inconsistent or illogical outputs. Neuro-symbolic methods promise to address this by inducing interpretable logical rules, but they typically rely on rigid, domain-specific perception modules. We propose Vision-Language Programs (VLP), which combine the perceptual flexibility of VLMs with the systematic reasoning of program synthesis. Rather than embedding reasoning inside the VLM, VLP uses the model to produce structured visual descriptions that are compiled into neuro-symbolic programs. The resulting programs execute directly on images, remain consistent with task constraints, and provide human-interpretable explanations that make shortcut mitigation straightforward. Experiments on synthetic and real-world datasets demonstrate that VLPs outperform direct and structured prompting, particularly on tasks requiring complex logical reasoning.
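The pipeline sketched in the abstract (VLM perception produces a structured description, which is compiled into an executable program that runs over the image) can be illustrated with a toy example. All names here (`Obj`, `describe_image`, `compile_rule`) are hypothetical stand-ins, not the paper's actual API:

```python
# Hypothetical sketch of the VLP idea: structured perception output
# plus a compiled, executable predicate. Names are illustrative only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Obj:
    shape: str
    color: str

def describe_image(image_id: str) -> List[Obj]:
    # Stand-in for VLM perception: a VLM would emit this structured
    # scene description from the raw image.
    return [Obj("circle", "red"), Obj("square", "red")]

def compile_rule(rule: str) -> Callable[[List[Obj]], bool]:
    # Stand-in for program synthesis: compile a natural-language rule
    # into an executable predicate over the structured description.
    if rule == "all objects are red":
        return lambda scene: all(o.color == "red" for o in scene)
    raise ValueError(f"unsupported rule: {rule}")

# The compiled program executes directly on the image's description,
# yielding a verifiable, human-interpretable decision.
program = compile_rule("all objects are red")
result = program(describe_image("img_001"))
print(result)  # → True
```

Because the decision is produced by an explicit program rather than a free-form generation, each step of the reasoning trace can be inspected and checked against the task constraints.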