NLKI: A lightweight Natural Language Knowledge Integration Framework for Improving Small VLMs in Commonsense VQA Tasks

📅 2025-08-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Small vision-language models (sVLMs) underperform on commonsense visual question answering because the knowledge needed to answer is often missing from the image or the question. To address this, we propose NLKI, a lightweight end-to-end framework that fine-tunes ColBERTv2 to retrieve natural-language facts and combines them with object-aware visual information in knowledge-enhanced prompts, guiding a large language model (LLM) to generate interpretable, low-hallucination explanations. Crucially, NLKI can substitute the LLM's implicit commonsense knowledge for conventional structured knowledge bases, enabling efficient, scalable knowledge injection. To mitigate label noise and retrieval inaccuracies, the framework is additionally fine-tuned with noise-robust losses, including symmetric and generalized cross-entropy. On the CRIC and AOKVQA benchmarks, NLKI boosts the accuracy of 250M-parameter sVLMs such as FLAVA by up to 7%, matching or surpassing Qwen-2 VL-2B; noise-robust training yields further consistent gains of 2.5–5.5%.
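The noise-robust losses named above have standard formulations from the label-noise literature. A minimal NumPy sketch follows; the hyperparameters (`alpha`, `beta`, `q`, and the log-clipping constant) are illustrative defaults, not the paper's settings:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def symmetric_cross_entropy(logits, labels, alpha=0.1, beta=1.0, clip_log=-4.0):
    """SCE = alpha * CE + beta * reverse-CE. The reverse term treats log(0)
    of the one-hot target as the finite constant clip_log, which bounds the
    penalty on mislabelled examples (values here are illustrative)."""
    p = softmax(logits)
    p_true = p[np.arange(len(labels)), labels]
    ce = -np.log(np.clip(p_true, 1e-7, 1.0))
    rce = -clip_log * (1.0 - p_true)  # = -sum_{k != y} p_k * clip_log
    return alpha * ce + beta * rce

def generalized_cross_entropy(logits, labels, q=0.7):
    """GCE = (1 - p_y^q) / q. As q -> 0 this recovers cross-entropy;
    q = 1 gives MAE, which is robust to noisy labels. The loss is
    bounded above by 1/q regardless of how wrong the prediction is."""
    p = softmax(logits)
    p_true = p[np.arange(len(labels)), labels]
    return (1.0 - np.clip(p_true, 1e-7, 1.0) ** q) / q

# A confidently wrong prediction is penalised, but the penalty stays bounded,
# unlike plain cross-entropy, which grows without limit as p_y -> 0.
logits = np.array([[8.0, 0.0, 0.0],   # confident and correct (label 0)
                   [8.0, 0.0, 0.0]])  # confident and wrong   (label 1)
labels = np.array([0, 1])
sce = symmetric_cross_entropy(logits, labels)
gce = generalized_cross_entropy(logits, labels)
```

The boundedness is what stabilises training on benchmarks with 10–25% label noise: a mislabelled example cannot dominate the gradient the way it would under unbounded cross-entropy.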

📝 Abstract
Commonsense visual-question answering often hinges on knowledge that is missing from the image or the question. Small vision-language models (sVLMs) such as ViLT, VisualBERT and FLAVA therefore lag behind their larger generative counterparts. To study the effect of careful commonsense knowledge integration on sVLMs, we present an end-to-end framework (NLKI) that (i) retrieves natural language facts, (ii) prompts an LLM to craft natural language explanations, and (iii) feeds both signals to the sVLMs, evaluated across two commonsense VQA datasets (CRIC, AOKVQA) and a visual-entailment dataset (e-SNLI-VE). Facts retrieved using a fine-tuned ColBERTv2 and an object-information-enriched prompt yield explanations that largely cut down hallucinations, while lifting the end-to-end answer accuracy by up to 7% (across 3 datasets), making FLAVA and other models in NLKI match or exceed medium-sized VLMs such as Qwen-2 VL-2B and SmolVLM-2.5B. As these benchmarks contain 10–25% label noise, additional fine-tuning using noise-robust losses (such as symmetric cross entropy and generalised cross entropy) adds another 2.5% in CRIC, and 5.5% in AOKVQA. Our findings expose when LLM-based commonsense knowledge beats retrieval from commonsense knowledge bases, how noise-aware training stabilises small models in the context of external knowledge augmentation, and why parameter-efficient commonsense reasoning is now within reach for 250M models.
Problem

Research questions and friction points this paper is trying to address.

Integrating commonsense knowledge into small vision-language models
Reducing hallucinations in commonsense visual question answering
Improving accuracy of small VLMs on noisy VQA benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieves natural language facts using fine-tuned ColBERTv2
Prompts LLM to craft natural language explanations
Uses noise-robust losses for additional finetuning
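The paper's exact prompt template is not reproduced on this page; a hypothetical sketch of how retrieved facts and detected-object information might be assembled into a knowledge-enhanced prompt for the explanation-generating LLM (the function name, template wording, and `top_k` default are illustrative, not NLKI's):

```python
def build_knowledge_prompt(question, objects, facts, top_k=3):
    """Assemble a knowledge-enhanced prompt from detected object tags and
    retrieved natural-language facts. Hypothetical template: the real NLKI
    prompt may order or phrase these fields differently."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts[:top_k])
    object_line = ", ".join(objects)
    return (
        f"Objects in the image: {object_line}\n"
        f"Relevant commonsense facts:\n{fact_lines}\n"
        f"Question: {question}\n"
        "Explain the commonsense reasoning needed to answer, then answer briefly."
    )

prompt = build_knowledge_prompt(
    question="Why is the man holding an umbrella?",
    objects=["man", "umbrella", "wet street"],
    facts=[
        "Umbrellas protect people from rain.",
        "Wet streets usually indicate recent rain.",
    ],
)
```

Capping the fact list at `top_k` keeps the prompt short enough for a small model's context while still grounding the LLM's explanation in retrieved knowledge rather than free-form generation, which is what cuts down hallucinations.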