HDRSDR-VQA: A Subjective Video Quality Dataset for HDR and SDR Comparative Evaluation

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the fundamental incomparability of visual quality across dynamic ranges between HDR and SDR videos. To this end, we construct the first subjective quality assessment dataset designed for direct HDR/SDR comparison: 960 videos (half HDR, half SDR) derived from 54 source sequences at nine distortion levels, evaluated in large-scale paired-comparison experiments involving 145 participants and six consumer-grade HDR TVs. The protocol enables direct preference judgments between HDR and SDR versions of identical content under matched distortion conditions, using real display devices and realistic viewing environments. Over 22,000 reliable pairwise comparisons were collected and scaled into Just-Objectionable-Difference (JOD) quality scores. A subset of the dataset has been publicly released, supporting cross-format perceptual modeling, adaptive streaming strategy optimization, and no-reference video quality assessment research.

📝 Abstract
We introduce HDRSDR-VQA, a large-scale video quality assessment dataset designed to facilitate comparative analysis between High Dynamic Range (HDR) and Standard Dynamic Range (SDR) content under realistic viewing conditions. The dataset comprises 960 videos generated from 54 diverse source sequences, each presented in both HDR and SDR formats across nine distortion levels. To obtain reliable perceptual quality scores, we conducted a comprehensive subjective study involving 145 participants and six consumer-grade HDR-capable televisions. A total of over 22,000 pairwise comparisons were collected and scaled into Just-Objectionable-Difference (JOD) scores. Unlike prior datasets that focus on a single dynamic range format or use limited evaluation protocols, HDRSDR-VQA enables direct content-level comparison between HDR and SDR versions, supporting detailed investigations into when and why one format is preferred over the other. The open-sourced part of the dataset is publicly available to support further research in video quality assessment, content-adaptive streaming, and perceptual model development.
Problem

Research questions and friction points this paper is trying to address.

Comparative video quality analysis between HDR and SDR content
Subjective quality assessment under realistic viewing conditions
Dataset supports investigation of HDR vs SDR format preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale HDR and SDR video dataset
Subjective study with 145 participants
Pairwise comparisons scaled to JOD scores
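The paper's exact scaling procedure is not detailed in this summary; JOD scaling is typically done with a Thurstonian observer model. As an illustration of the general idea, the sketch below fits the closely related Bradley-Terry model to a pairwise win-count matrix and returns log-strength scores anchored at the first condition (the function name and toy counts are hypothetical, not from the paper):

```python
import numpy as np

def bradley_terry_scores(wins, n_iter=200):
    """Estimate latent quality scores from pairwise preference counts.

    wins[i][j] = number of times condition i was preferred over condition j.
    Uses the classic minorization-maximization (MM) update for the
    Bradley-Terry model, then returns log-strengths anchored so that
    the first condition scores 0 (a JOD-like relative scale).
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    p = np.ones(n)            # initial strengths
    total = wins + wins.T     # comparisons per pair
    for _ in range(n_iter):
        for i in range(n):
            num = wins[i].sum()
            den = sum(total[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            if den > 0:
                p[i] = num / den
        p /= p.sum()          # fix the arbitrary overall scale
    return np.log(p) - np.log(p[0])

# Toy example: condition 0 is preferred over 1 and 2, and 1 over 2.
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry_scores(wins)
```

For the toy matrix, `scores` is 0 for condition 0 and increasingly negative for conditions 1 and 2, reflecting the observed preference ordering. A Thurstone Case V fit (as used by common JOD scaling toolboxes such as pwcmp) would follow the same pattern with a Gaussian rather than logistic noise model.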