🤖 AI Summary
Existing image quality databases focus predominantly on digital-media distortions, neglecting the affine transformations (rotation, translation, scaling) that human observers encounter constantly in natural viewing. Method: The authors introduce a large-scale subjective image quality database built from more than 20,000 quadruple comparisons collected from 105 observers over 864 images degraded by suprathreshold affine transforms and Gaussian noise. Perceptual scales are derived with the well-established Maximum Likelihood Difference Scaling (MLDS) method. Contribution/Results: The dataset reproduces the classical Piéron's law and classical absolute detection thresholds, and it is consistent with conventional image quality databases while improving on them in Group-MAD experiments. It fills a gap in the modeling of naturally occurring distortions and provides a reproducible resource for training image quality assessment models and for research into visual perception mechanisms.
📝 Abstract
Image quality databases are used to train models for predicting subjective human perception. However, most existing databases focus on distortions commonly found in digital media rather than in natural conditions. Affine transformations are particularly relevant to study, as they are among the distortions most commonly encountered by human observers in everyday life. This Data Descriptor presents a set of human responses to suprathreshold affine image transforms (rotation, translation, scaling) and to Gaussian noise, the latter included as a convenient reference for comparison with previously existing image quality databases. The responses were measured using well-established psychophysics: the Maximum Likelihood Difference Scaling method. The set contains responses to 864 distorted images. The experiments involved 105 observers and more than 20,000 comparisons of quadruples of images. The quality of the dataset is ensured because (a) it reproduces the classical Piéron's law, (b) it reproduces classical absolute detection thresholds, and (c) it is consistent with conventional image quality databases but improves on them according to Group-MAD experiments.
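The Maximum Likelihood Difference Scaling procedure mentioned above can be illustrated with a minimal sketch: on each trial an observer sees a quadruple (a, b; c, d) of distortion levels and reports which pair looks more different, and a perceptual scale is then fit by maximizing the likelihood of those binary responses under a Gaussian decision-noise model. This is a simplified illustration with simulated responses, not the authors' code; the number of levels, the noise level, and the trial count are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical ground-truth perceptual scale over 8 distortion levels
# (quadratic, anchored to [0, 1]).
true_scale = np.linspace(0, 1, 8) ** 2

def simulate_trials(scale, n_trials=2000, sigma=0.1):
    """Simulate quadruple trials: the observer picks the pair whose
    perceived difference is larger, corrupted by decision noise sigma."""
    n = len(scale)
    quads = rng.integers(0, n, size=(n_trials, 4))
    d1 = np.abs(scale[quads[:, 1]] - scale[quads[:, 0]])
    d2 = np.abs(scale[quads[:, 3]] - scale[quads[:, 2]])
    p_second = norm.cdf((d2 - d1) / sigma)
    resp = rng.random(n_trials) < p_second  # True: second pair judged larger
    return quads, resp

def neg_log_likelihood(params, quads, resp, sigma=0.1):
    """MLDS objective: fit interior scale values, anchoring psi[0]=0, psi[-1]=1."""
    psi = np.concatenate(([0.0], params, [1.0]))
    d1 = np.abs(psi[quads[:, 1]] - psi[quads[:, 0]])
    d2 = np.abs(psi[quads[:, 3]] - psi[quads[:, 2]])
    p = np.clip(norm.cdf((d2 - d1) / sigma), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(resp, np.log(p), np.log(1 - p)))

quads, resp = simulate_trials(true_scale)
x0 = np.linspace(0, 1, 8)[1:-1]          # naive linear initial guess
fit = minimize(neg_log_likelihood, x0, args=(quads, resp))
psi_hat = np.concatenate(([0.0], fit.x, [1.0]))
```

With enough trials, the recovered scale `psi_hat` closely tracks the (here simulated) nonlinear perceptual scale, which is the property that makes MLDS suitable for measuring suprathreshold distortion magnitudes.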