Augmented Vision-Language Models: A Systematic Review

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) exhibit strong perceptual capabilities but suffer from inherent limitations—including weak logical reasoning, inability to update knowledge without costly retraining, and poor interpretability. Method: This paper systematically surveys neural-symbolic integration approaches and proposes a fine-tuning-free collaborative reasoning framework: a pre-trained VLM serves as the neural frontend, tightly coupled with symbolic components—such as knowledge graphs and formal rule engines—to enable plug-and-play external knowledge injection and transparent, multi-step reasoning. Contribution/Results: We introduce the first taxonomy of neural-symbolic methods specifically designed for VLM enhancement, precisely delineating the applicability boundaries and bottlenecks of each technique in multimodal understanding. The framework provides a systematic solution to improve model interpretability, dynamic knowledge integration, and structured reasoning performance, advancing the state of explainable and adaptable multimodal AI.

📝 Abstract
Recent advances in vision-language machine learning models have demonstrated an exceptional ability to use natural language and understand visual scenes by training on large, unstructured datasets. However, this training paradigm cannot produce interpretable explanations for its outputs, requires retraining to integrate new information, is highly resource-intensive, and struggles with certain forms of logical reasoning. One promising solution involves integrating neural networks with external symbolic information systems, forming neural-symbolic systems that can enhance reasoning and memory abilities. These neural-symbolic systems provide more interpretable explanations for their outputs and the capacity to assimilate new information without extensive retraining. Utilizing powerful pre-trained Vision-Language Models (VLMs) as the core neural component, augmented by external systems, offers a pragmatic approach to realizing the benefits of neural-symbolic integration. This systematic literature review aims to categorize techniques through which visual-language understanding can be improved by interacting with external symbolic information systems.
Problem

Research questions and friction points this paper is trying to address.

Enhance interpretability of vision-language model outputs
Reduce retraining needs for new information integration
Improve logical reasoning in vision-language tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrate neural networks with symbolic systems
Use pre-trained VLMs as neural core
Enhance reasoning with external symbolic data
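The collaborative pattern described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: `perceive` is a stub standing in for a pre-trained VLM frontend that emits grounded symbolic facts, `KnowledgeGraph` is a toy triple store whose contents can be edited without any retraining, and `reason` is a minimal forward-chaining rule engine (here, transitivity of `is_a`) that records each inference step as an interpretable trace.

```python
def perceive(image_path):
    # Stand-in for the neural frontend: a real system would run a
    # pre-trained VLM here and ground its output as symbolic triples.
    return {("obj1", "is_a", "dog")}

class KnowledgeGraph:
    """Plug-and-play external knowledge: updating it is a set insertion,
    not a gradient step, so new facts need no retraining."""
    def __init__(self, triples):
        self.triples = set(triples)

    def add(self, triple):
        self.triples.add(triple)

def reason(facts, kg):
    """Forward-chain the transitivity rule over `is_a` to a fixpoint,
    keeping a human-readable trace of every derivation."""
    known = set(facts) | kg.triples
    trace = []
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(known):
            for (b2, r2, c) in list(known):
                if r1 == r2 == "is_a" and b == b2:
                    new = (a, "is_a", c)
                    if new not in known:
                        known.add(new)
                        trace.append(f"{a} is_a {b} & {b} is_a {c} "
                                     f"=> {a} is_a {c}")
                        changed = True
    return known, trace

kg = KnowledgeGraph({("dog", "is_a", "mammal"),
                     ("mammal", "is_a", "animal")})
facts = perceive("scene.jpg")
known, trace = reason(facts, kg)
print(("obj1", "is_a", "animal") in known)  # → True
```

Because the symbolic backend is separate from the neural frontend, the two benefits the survey highlights fall out directly: the `trace` list is the transparent multi-step explanation, and calling `kg.add(...)` is the costless knowledge update.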