Seeing Through Words: Controlling Visual Retrieval Quality with Language Models

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of short-text queries in text-to-image retrieval, which often suffer from semantic ambiguity and lack explicit control over image quality, leading to inaccurate or user-misaligned results. The authors propose a novel quality-controllable retrieval paradigm that, for the first time, integrates both image relevance and aesthetic quality into the query expansion process of a generative language model. This approach transforms concise queries into descriptive texts enriched with fine-grained visual attributes and explicit quality preferences. Notably, it achieves flexible, transparent, and user-preference-driven high-quality image retrieval without modifying pre-trained vision-language models. Extensive experiments demonstrate that the method significantly improves retrieval performance across multiple benchmarks, exhibiting both strong effectiveness and broad generalizability.

📝 Abstract
Text-to-image retrieval is a fundamental task in vision-language learning, yet in real-world scenarios it is often challenged by short and underspecified user queries. Such queries are typically only one or two words long, rendering them semantically ambiguous, prone to collisions across diverse visual interpretations, and lacking explicit control over the quality of retrieved images. To address these issues, we propose a new paradigm of quality-controllable retrieval, which enriches short queries with contextual details while incorporating explicit notions of image quality. Our key idea is to leverage a generative language model as a query completion function, extending underspecified queries into descriptive forms that capture fine-grained visual attributes such as pose, scene, and aesthetics. We introduce a general framework that conditions query completion on discretized quality levels, derived from relevance and aesthetic scoring models, so that query enrichment is not only semantically meaningful but also quality-aware. The resulting system provides three key advantages: 1) flexibility: it is compatible with any pretrained vision-language model (VLM) without modification; 2) transparency: enriched queries are explicitly interpretable by users; and 3) controllability: retrieval results can be steered toward user-preferred quality levels. Extensive experiments demonstrate that our proposed approach significantly improves retrieval results and provides effective quality control, bridging the gap between the expressive capacity of modern VLMs and the underspecified nature of short user queries. Our code is available at https://github.com/Jianglin954/QCQC.
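The pipeline the abstract describes can be sketched in three steps: discretize an aesthetic score into a quality level, condition the language model's query expansion on that level, and combine relevance with aesthetic scores when ranking. The sketch below is illustrative only: the function names, prompt wording, score thresholds, and the linear score combination are assumptions, not the paper's actual API or implementation.

```python
# Minimal sketch of quality-conditioned query expansion and reranking.
# All names and thresholds are illustrative assumptions, not the QCQC code.

def quality_bucket(aesthetic_score, thresholds=(0.33, 0.66)):
    """Discretize a continuous aesthetic score in [0, 1] into one of
    {low, medium, high} quality levels."""
    if aesthetic_score < thresholds[0]:
        return "low"
    if aesthetic_score < thresholds[1]:
        return "medium"
    return "high"

def build_expansion_prompt(short_query, quality_level):
    """Build a prompt that asks a generative language model to enrich a
    short query with visual attributes (pose, scene, lighting) while
    explicitly stating the desired quality level."""
    return (
        f"Expand the image-search query '{short_query}' into a detailed "
        f"description covering subject, pose, scene, and lighting, "
        f"written for a photo of {quality_level} aesthetic quality."
    )

def rerank(candidates, alpha=0.5):
    """Rank candidates (image_id, relevance, aesthetic) by a weighted
    combination of relevance and aesthetic scores; alpha trades off
    semantic match against image quality."""
    scored = [(i, alpha * r + (1 - alpha) * a) for i, r, a in candidates]
    return [i for i, _ in sorted(scored, key=lambda x: -x[1])]
```

In use, the expanded prompt would be sent to any text-completion model, and the resulting descriptive query embedded by an unmodified pretrained VLM (e.g. a CLIP-style text encoder) for retrieval; only the query text changes, which is what makes the approach model-agnostic and transparent to the user.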
Problem

Research questions and friction points this paper is trying to address.

text-to-image retrieval
short queries
semantic ambiguity
image quality control
vision-language learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

quality-controllable retrieval
query completion
vision-language models
aesthetic-aware retrieval
short query enrichment