🤖 AI Summary
Multimodal large language models (MLLMs) exhibit pervasive hallucination and insufficient self-awareness on low-level visual perception and understanding tasks such as image quality assessment. This work is the first to systematically study Hallucinations in Low-level visual Perception and Understanding (HLPU), constructing the HLPU instruction database: approximately 200K question-answer pairs organized into four subsets covering different instruction types. Method: the Self-Awareness Failure Elimination (SAFEQA) model jointly leverages image features, salient-region features, and quality features to strengthen low-level perception and comprehension; in addition, the Enhancing Self-Awareness Preference Optimization (ESA-PO) framework increases the model's awareness of its knowledge boundaries, together with an evaluation protocol tailored to low-level vision tasks. Results: experiments show that the proposed method significantly reduces hallucination, improves model self-awareness, and achieves superior assessment accuracy, outperforming leading closed-source models on multiple evaluation metrics.
📝 Abstract
The rapid development of multimodal large language models has resulted in remarkable advancements in visual perception and understanding, consolidating several tasks into a single visual question-answering framework. However, these models are prone to hallucinations, which limit their reliability as artificial intelligence systems. While this issue has been extensively researched in natural language processing and image captioning, hallucinations in Low-level Visual Perception and Understanding (HLPU) remain largely uninvestigated, especially in the context of image quality assessment tasks. We argue that these hallucinations arise from the models' lack of clear self-awareness. To address this issue, we first introduce the HLPU instruction database, the first instruction database specifically focused on hallucinations in low-level vision tasks. This database contains approximately 200K question-answer pairs and comprises four subsets, each covering different types of instructions. Subsequently, we propose the Self-Awareness Failure Elimination (SAFEQA) model, which utilizes image features, salient-region features, and quality features to improve the model's perception and comprehension abilities in low-level vision tasks. Furthermore, we propose the Enhancing Self-Awareness Preference Optimization (ESA-PO) framework to increase the model's awareness of its knowledge boundaries, thereby mitigating hallucination. Finally, we conduct comprehensive experiments on low-level vision tasks, and the results demonstrate that our method significantly enhances the model's self-awareness and reduces hallucinations. Notably, it improves both accuracy and self-awareness, outperforming closed-source models on various evaluation metrics.
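ESA-PO is described above as a preference-optimization framework; the abstract does not give its exact objective, but such frameworks are commonly built on a DPO-style loss that rewards the policy for preferring the chosen answer over a rejected one relative to a frozen reference model. The sketch below is a minimal, hypothetical illustration of that family of objectives, not the paper's actual loss; all names and the `beta` parameter are assumptions.

```python
import math

def preference_loss(logp_chosen: float, logp_rejected: float,
                    ref_logp_chosen: float, ref_logp_rejected: float,
                    beta: float = 0.1) -> float:
    """DPO-style loss for a single preference pair (illustrative only).

    The policy is rewarded for widening the log-likelihood margin of the
    preferred answer over the rejected one, measured relative to a frozen
    reference model; beta controls how far the policy may drift from it.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): small when the chosen answer is clearly preferred
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At initialization, when the policy matches the reference, the margin is zero and the loss is log 2; as the policy assigns relatively more probability to the preferred answer, the loss decreases toward zero.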