CHUG: Crowdsourced User-Generated HDR Video Quality Dataset

📅 2025-10-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing HDR video quality assessment datasets focus predominantly on professionally generated content (PGC) and do not model the authentic degradations found in user-generated content (UGC) HDR videos on real-world platforms. To close this gap, we introduce the first large-scale UGC-HDR subjective quality dataset, built from videos sourced from YouTube, TikTok, and other major platforms: 856 reference videos and 5,992 distorted variants. Distortions are synthesized via multi-resolution, multi-bitrate transcoding to emulate realistic delivery impairments, and 211,848 subjective quality ratings were collected via Amazon Mechanical Turk. The dataset is the first to systematically characterize typical UGC-HDR degradation patterns and establishes a no-reference video quality assessment (VQA) benchmark for authentic UGC scenarios, which is expected to improve model generalizability and assessment accuracy in complex, real-world UGC environments.

📝 Abstract
High Dynamic Range (HDR) videos enhance visual experiences with superior brightness, contrast, and color depth. The surge of User-Generated Content (UGC) on platforms like YouTube and TikTok introduces unique challenges for HDR video quality assessment (VQA) due to diverse capture conditions, editing artifacts, and compression distortions. Existing HDR-VQA datasets primarily focus on professionally generated content (PGC), leaving a gap in understanding real-world UGC-HDR degradations. To address this, we introduce CHUG: Crowdsourced User-Generated HDR Video Quality Dataset, the first large-scale subjective study on UGC-HDR quality. CHUG comprises 856 UGC-HDR source videos, transcoded across multiple resolutions and bitrates to simulate real-world scenarios, totaling 5,992 videos. A large-scale study via Amazon Mechanical Turk collected 211,848 perceptual ratings. CHUG provides a benchmark for analyzing UGC-specific distortions in HDR videos. We anticipate CHUG will advance No-Reference (NR) HDR-VQA research by offering a large-scale, diverse, and real-world UGC dataset. The dataset is publicly available at: https://shreshthsaini.github.io/CHUG/.
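The 211,848 raw crowdsourced ratings are typically aggregated into per-video mean opinion scores (MOS). A common approach in crowdsourced VQA studies is per-subject z-score normalization before averaging, to compensate for individual rater biases; the exact recovery procedure used by CHUG is not described here, so the following is a generic sketch with a hypothetical `mos` helper and toy data.

```python
from collections import defaultdict
from statistics import mean, stdev

def mos(ratings):
    """Aggregate (subject, video, score) triples into per-video MOS.

    Each subject's scores are z-normalized first, a common step in
    crowdsourced studies to compensate for individual rating biases.
    """
    by_subject = defaultdict(list)
    for subj, _, score in ratings:
        by_subject[subj].append(score)
    stats = {
        s: (mean(v), stdev(v) if len(v) > 1 else 1.0)
        for s, v in by_subject.items()
    }
    z_by_video = defaultdict(list)
    for subj, vid, score in ratings:
        mu, sd = stats[subj]
        z_by_video[vid].append((score - mu) / (sd if sd > 0 else 1.0))
    return {vid: mean(z) for vid, z in z_by_video.items()}

# Toy example: two raters, two videos, on a hypothetical 1-5 scale.
scores = mos([("A", "v1", 3), ("A", "v2", 5), ("B", "v1", 2), ("B", "v2", 4)])
print(round(scores["v2"], 3))  # 0.707
```

In this toy run, both raters prefer v2, so its normalized MOS comes out positive and v1's negative, regardless of each rater's personal offset on the raw scale.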
Problem

Research questions and friction points this paper is trying to address.

Assessing HDR video quality for user-generated content with diverse degradations
Addressing the gap in datasets for real-world UGC-HDR video quality evaluation
Providing benchmark data for No-Reference HDR video quality assessment methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

First large-scale subjective UGC-HDR quality dataset
Multi-resolution, multi-bitrate transcoding to emulate realistic delivery distortions
Public dataset for No-Reference HDR-VQA research
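The distortion synthesis above (transcoding each source across a resolution-bitrate ladder) can be sketched as generating one ffmpeg command per (resolution, bitrate) pair. The specific resolutions, bitrates, and codec below are illustrative assumptions, not CHUG's actual settings, which are not listed in this summary.

```python
from itertools import product

# Hypothetical transcoding ladder; CHUG's actual resolutions and
# bitrates are not given in this summary.
RESOLUTIONS = ["3840x2160", "1920x1080", "1280x720"]
BITRATES_KBPS = [500, 1500, 4000]

def transcoding_commands(src: str) -> list[str]:
    """Build one ffmpeg HEVC transcode command per (resolution, bitrate) pair.

    HDR10 metadata flags are omitted for brevity; a real HDR pipeline would
    also carry over color primaries and transfer characteristics
    (e.g. BT.2020 / PQ).
    """
    cmds = []
    for res, kbps in product(RESOLUTIONS, BITRATES_KBPS):
        out = f"{src.rsplit('.', 1)[0]}_{res}_{kbps}k.mp4"
        cmds.append(
            f"ffmpeg -i {src} -vf scale={res.replace('x', ':')} "
            f"-c:v libx265 -b:v {kbps}k -an {out}"
        )
    return cmds

cmds = transcoding_commands("ref_0001.mp4")
print(len(cmds))  # 3 resolutions x 3 bitrates = 9 variants per source
```

Applying such a ladder to every reference video is how a few hundred sources expand into thousands of distorted variants (in CHUG, 856 references yield 5,992 videos).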