Leveraging Vision-Language Models to Select Trustworthy Super-Resolution Samples Generated by Diffusion Models

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Super-resolution (SR), an ill-posed inverse problem, suffers from structural artifacts in regression-based methods, while diffusion models generate diverse yet unverifiable solutions due to the lack of reliable selection mechanisms. To address this, we propose a vision-language model (VLM)-driven framework for selecting trustworthy SR samples. Our method employs BLIP-2 and GPT-4o for structured semantic querying, and integrates CLIP-based semantic similarity, edge-structure fidelity, and multi-level wavelet-based artifact detection into a novel Trustworthiness Score (TWS). TWS is the first automated metric empirically validated to align closely with human preference. Experiments demonstrate that TWS significantly outperforms conventional metrics, including PSNR and LPIPS, on both natural and motion-blurred images, effectively enhancing the semantic correctness, perceptual quality, and overall trustworthiness of SR outputs.

📝 Abstract
Super-resolution (SR) is an ill-posed inverse problem with many feasible solutions consistent with a given low-resolution image. On one hand, regressive SR models aim to balance fidelity and perceptual quality to yield a single solution, but this trade-off often introduces artifacts that create ambiguity in information-critical applications such as recognizing digits or letters. On the other hand, diffusion models generate a diverse set of SR images, but selecting the most trustworthy solution from this set remains a challenge. This paper introduces a robust, automated framework for identifying the most trustworthy SR sample from a diffusion-generated set by leveraging the semantic reasoning capabilities of vision-language models (VLMs). Specifically, VLMs such as BLIP-2, GPT-4o, and their variants are prompted with structured queries to assess semantic correctness, visual quality, and artifact presence. The top-ranked SR candidates are then ensembled to yield a single trustworthy output in a cost-effective manner. To rigorously assess the validity of VLM-selected samples, we propose a novel Trustworthiness Score (TWS), a hybrid metric that quantifies SR reliability based on three complementary components: semantic similarity via CLIP embeddings, structural integrity using SSIM on edge maps, and artifact sensitivity through multi-level wavelet decomposition. We empirically show that TWS correlates strongly with human preference on both ambiguous and natural images, and that VLM-guided selections consistently yield high TWS values. Compared to conventional metrics such as PSNR and LPIPS, which fail to reflect information fidelity, our approach offers a principled, scalable, and generalizable solution for navigating the uncertainty of the diffusion SR space. By aligning outputs with human expectations and semantic correctness, this work sets a new benchmark for trustworthiness in generative SR.
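The abstract describes TWS as a weighted combination of three terms: CLIP-embedding cosine similarity, SSIM computed on edge maps, and a wavelet-based artifact term. The sketch below illustrates that structure with stand-ins only: generic embedding vectors in place of real CLIP features, a single-window SSIM instead of the usual sliding-window version, a Sobel magnitude for the edge map, a one-level Haar decomposition for the wavelet term, and illustrative weights. None of these choices or the `trustworthiness_score` function are from the paper; they are assumptions for demonstration.

```python
import numpy as np

def cosine_sim(a, b):
    # Semantic term: cosine similarity between embedding vectors
    # (the paper uses CLIP embeddings; here any vectors serve as stand-ins).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sobel_edges(img):
    # Crude Sobel gradient magnitude as an edge map.
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    # Simplified single-window SSIM over the whole array
    # (the paper applies SSIM to edge maps; a real implementation
    # would use local windows, e.g. skimage's structural_similarity).
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

def haar_detail_energy(img):
    # One-level Haar decomposition; mean energy of the three detail
    # bands serves as a simple high-frequency / artifact proxy.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 4
    hl = (a + b - c - d) / 4
    hh = (a - b - c + d) / 4
    return float(np.mean(lh**2 + hl**2 + hh**2))

def trustworthiness_score(emb_sr, emb_ref, sr, ref, weights=(0.4, 0.4, 0.2)):
    # Hypothetical weighted combination; the weights and the exponential
    # normalization of the artifact term are illustrative, not the paper's.
    sem = cosine_sim(emb_sr, emb_ref)
    struct = global_ssim(sobel_edges(sr), sobel_edges(ref))
    art = np.exp(-abs(haar_detail_energy(sr) - haar_detail_energy(ref)))
    w1, w2, w3 = weights
    return w1 * sem + w2 * struct + w3 * art
```

With weights summing to 1, a candidate identical to the reference scores 1.0 (each term attains its maximum), and degraded candidates score lower as any of the three terms drops, which mirrors how a hybrid metric of this kind ranks a set of diffusion samples.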
Problem

Research questions and friction points this paper is trying to address.

Select trustworthy super-resolution samples from diffusion models
Balance fidelity and perceptual quality in super-resolution
Assess semantic correctness and visual quality automatically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging VLMs to select trustworthy SR samples
Proposing Trustworthiness Score (TWS) for SR reliability
Ensembling top-ranked SR candidates cost-effectively
Cansu Korkmaz
PhD, Koc University
super resolution, computer vision, deep learning, image restoration
Ahmet Murat Tekalp
Department of Electrical and Electronics Engineering and KUIS AI Center, Koc University, 34450 Istanbul, Turkey
Zafer Dogan
Koç University
Signal Processing, Image Processing, Inverse Problems, Machine Learning