Q-Tacit: Image Quality Assessment via Latent Visual Reasoning

πŸ“… 2026-03-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limitations of current vision-language model–based image quality assessment (IQA) methods, which overly rely on natural language reasoning and struggle to capture fine-grained visual quality cues. To overcome this, the authors propose Q-Tacit, a novel paradigm that shifts quality reasoning from linguistic space into a latent visual quality space. By injecting structured visual quality priors and calibrating latent reasoning trajectories, Q-Tacit enables accurate quality judgments without explicit text generation. The approach substantially reduces inference token consumption while outperforming existing reasoning-based methods across multiple IQA benchmarks, demonstrating the efficacy and feasibility of non-linguistic, compact representations for image quality evaluation.

πŸ“ Abstract
Vision-Language Model (VLM)-based image quality assessment (IQA) has been significantly advanced by incorporating Chain-of-Thought (CoT) reasoning. Recent work has refined image quality reasoning by applying reinforcement learning (RL) and leveraging active visual tools. However, such strategies are typically language-centric, with visual information treated as a static precondition. Quality-related visual cues often cannot be fully abstracted into text due to the gap between discrete textual tokens and the quality perception space, which in turn restricts reasoning effectiveness for visually intensive IQA tasks. In this paper, we revisit this issue by asking, "Is natural language the ideal space for quality reasoning?" In response, we propose Q-Tacit, a new paradigm that elicits VLMs to reason beyond natural language in a latent quality space. Our approach follows a synergistic two-stage process: (i) injecting structural visual quality priors into the latent space, and (ii) calibrating latent reasoning trajectories to improve quality assessment ability. Extensive experiments demonstrate that Q-Tacit can effectively perform quality reasoning with significantly fewer tokens than previous reasoning-based methods, while achieving strong overall performance. This paper validates the proposition that language is not the only compact representation suitable for visual quality, opening possibilities for further exploration of effective latent reasoning paradigms for IQA. Source code will be released to support future research.
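
The abstract only names the two stages, so the sketch below is a minimal, assumed illustration of what latent quality reasoning could look like in code rather than the authors' implementation: learnable latent tokens stand in for the injected visual quality priors (stage i), a small transformer refines them over the image features as the latent reasoning step (stage ii), and a regression head reads off a quality score without generating any text. The module names, dimensions, and the use of plain PyTorch are all assumptions for illustration.

```python
# Hypothetical sketch of latent quality reasoning for IQA (assumed design,
# not Q-Tacit's released code). A quality score is predicted entirely in
# latent space, with no textual chain-of-thought decoded.
import torch
import torch.nn as nn


class LatentQualityReasoner(nn.Module):
    def __init__(self, feat_dim=768, num_quality_tokens=8, num_layers=4, num_heads=8):
        super().__init__()
        # (i) structural visual quality priors, modeled here as learnable latent tokens
        self.quality_tokens = nn.Parameter(
            torch.randn(1, num_quality_tokens, feat_dim) * 0.02
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True
        )
        # (ii) latent reasoning: the encoder refines the quality tokens by
        # attending over the image features, instead of emitting text tokens
        self.reasoner = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.score_head = nn.Sequential(nn.LayerNorm(feat_dim), nn.Linear(feat_dim, 1))

    def forward(self, image_features):
        # image_features: (batch, num_patches, feat_dim),
        # e.g. patch embeddings from a frozen VLM vision encoder
        b = image_features.size(0)
        tokens = self.quality_tokens.expand(b, -1, -1)
        x = torch.cat([tokens, image_features], dim=1)
        x = self.reasoner(x)
        # pool the refined quality tokens and regress a scalar quality score
        quality_latents = x[:, : tokens.size(1)].mean(dim=1)
        return self.score_head(quality_latents).squeeze(-1)


if __name__ == "__main__":
    model = LatentQualityReasoner()
    fake_patches = torch.randn(2, 196, 768)  # stand-in for vision-encoder features
    print(model(fake_patches).shape)  # torch.Size([2])
```

Because the output is a single regressed score per image, inference cost is dominated by one forward pass over the visual features plus a handful of latent tokens, which is consistent with the paper's claim of far lower token consumption than CoT-style reasoning.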
Problem

Research questions and friction points this paper is trying to address.

image quality assessment
vision-language model
latent reasoning
visual quality cues
natural language limitation
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent reasoning
image quality assessment
vision-language model
quality space
visual priors
πŸ”Ž Similar Papers
No similar papers found.