🤖 AI Summary
Super-resolution (SR) is an ill-posed inverse problem: regression-based methods introduce structural artifacts, while diffusion models generate diverse yet unverifiable solutions because they lack a reliable selection mechanism. To address this, we propose a trustworthy SR sample selection framework driven by vision-language models (VLMs). Our method employs BLIP-2 and GPT-4o for structured semantic querying, and integrates CLIP-based semantic similarity, edge-structure fidelity, and multi-level wavelet-based artifact detection into a novel Trustworthy Weighted Score (TWS). TWS is the first automated metric empirically validated to align closely with human preference. Experiments demonstrate that TWS significantly outperforms conventional metrics, including PSNR and LPIPS, on both natural and motion-blurred images, effectively enhancing the semantic correctness, perceptual quality, and overall trustworthiness of SR outputs.
📝 Abstract
Super-resolution (SR) is an ill-posed inverse problem with many feasible solutions consistent with a given low-resolution image. On one hand, regression-based SR models aim to balance fidelity and perceptual quality to yield a single solution, but this trade-off often introduces artifacts that create ambiguity in information-critical applications such as recognizing digits or letters. On the other hand, diffusion models generate a diverse set of SR images, but selecting the most trustworthy solution from this set remains a challenge. This paper introduces a robust, automated framework for identifying the most trustworthy SR sample from a diffusion-generated set by leveraging the semantic reasoning capabilities of vision-language models (VLMs). Specifically, VLMs such as BLIP-2, GPT-4o, and their variants are prompted with structured queries to assess semantic correctness, visual quality, and artifact presence. The top-ranked SR candidates are then ensembled to yield a single trustworthy output in a cost-effective manner. To rigorously assess the validity of VLM-selected samples, we propose a novel Trustworthiness Score (TWS), a hybrid metric that quantifies SR reliability through three complementary components: semantic similarity via CLIP embeddings, structural integrity via SSIM on edge maps, and artifact sensitivity via multi-level wavelet decomposition. We empirically show that TWS correlates strongly with human preference on both ambiguous and natural images, and that VLM-guided selections consistently yield high TWS values. Compared with conventional metrics such as PSNR and LPIPS, which fail to reflect information fidelity, our approach offers a principled, scalable, and generalizable way to navigate the uncertainty of the diffusion SR space. By aligning outputs with human expectations and semantic correctness, this work sets a new benchmark for trustworthiness in generative SR.
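The three TWS components can be sketched as a weighted combination. The sketch below is illustrative only: the abstract does not specify weights, wavelet basis, or SSIM window, so the stand-in embedding vectors (in place of CLIP features), Sobel edge maps, single-window SSIM, Haar decomposition, and the weights `w` are all assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_sim(a, b):
    # semantic term: cosine similarity between embedding vectors
    # (the paper uses CLIP embeddings; plain vectors stand in here)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sobel_edges(img):
    # edge map via Sobel gradient magnitude
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    # single-window SSIM over whole edge maps (a simplification of
    # the usual sliding-window SSIM)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def haar_highfreq_energy(img, levels=2):
    # multi-level Haar decomposition; accumulate detail-subband energy,
    # a proxy for high-frequency artifact content
    x = img.astype(float)
    energy = 0.0
    for _ in range(levels):
        h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
        x = x[:h, :w]
        a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
        dh = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
        dv = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
        dd = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
        energy += (dh ** 2 + dv ** 2 + dd ** 2).mean()
        x = a  # recurse into the approximation subband
    return energy

def trustworthiness_score(sr, ref, sr_emb, ref_emb, w=(0.4, 0.4, 0.2)):
    # hypothetical weighted blend of the three TWS components
    sem = cosine_sim(sr_emb, ref_emb)
    edge = global_ssim(sobel_edges(sr), sobel_edges(ref))
    # artifact term: penalise deviation in high-frequency energy
    art = np.exp(-abs(haar_highfreq_energy(sr) - haar_highfreq_energy(ref)))
    return w[0] * sem + w[1] * edge + w[2] * art
```

With equal inputs the score reaches its maximum of 1.0, and it drops as semantic, edge, or frequency-domain discrepancies grow; in practice the weights would be tuned against human preference data, as the paper's validation suggests.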