Ensuring Reliable Participation in Subjective Video Quality Tests Across Platforms

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In cross-platform subjective video quality assessment, participant unreliability (instruction neglect, reward abuse, video-metadata tampering, and remote-desktop usage) severely compromises data integrity. Method: This paper presents the first systematic identification and modeling of remote-desktop abuse and metadata manipulation, and proposes a hybrid objective-subjective framework for detecting anomalous participation. The framework integrates behavioral analytics, screen-resolution fingerprinting, network-latency features, and subjective rating-consistency checks. Contribution/Results: Evaluated on two major crowdsourcing platforms under real-world conditions, the framework substantially reduces anomalous participation rates and improves the stability and trustworthiness of the resulting quality assessments, establishing a reproducible, deployable, and robust approach to mitigating adversarial interference in crowdsourced subjective evaluation.
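
The paper's exact features and thresholds are not reproduced in this summary, but the objective side of the detector can be illustrated with a short sketch. The Python below scores a session as RD-like using two of the cues named above, a screen-resolution fingerprint and network-latency statistics; the `Session` fields, thresholds, and weights are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

# Common native display resolutions; RD sessions often report a scaled,
# non-native screen size absent from this set. (Illustrative list only.)
COMMON_NATIVE = {(1920, 1080), (2560, 1440), (3840, 2160), (1366, 768), (2880, 1800)}

@dataclass
class Session:
    screen: tuple   # (width, height) reported by the browser
    window: tuple   # inner window size during playback
    rtt_ms: list    # round-trip-time samples collected during the test

def rd_suspicion_score(s: Session) -> float:
    """Return a 0..1 score; higher means more RD-like. Heuristic sketch only."""
    score = 0.0
    # Resolution fingerprint: an odd, non-native screen size is RD-typical.
    if s.screen not in COMMON_NATIVE:
        score += 0.4
    # A viewport that exactly fills the screen (a full remote canvas) is a
    # weaker secondary cue.
    if s.window == s.screen:
        score += 0.2
    # Latency: the extra RD hop inflates and destabilizes round-trip times.
    if s.rtt_ms:
        mean = sum(s.rtt_ms) / len(s.rtt_ms)
        std = (sum((x - mean) ** 2 for x in s.rtt_ms) / len(s.rtt_ms)) ** 0.5
        if mean > 120 or std > 40:  # assumed thresholds, in milliseconds
            score += 0.4
    return min(score, 1.0)

# Synthetic RD-like session: scaled resolution, full-screen canvas, high RTT.
session = Session(screen=(1536, 864), window=(1536, 864), rtt_ms=[150, 180, 210])
print(rd_suspicion_score(session) >= 0.5)  # True -> flag for manual review
```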

📝 Abstract
Subjective video quality assessment (VQA) is the gold standard for measuring end-user experience across communication, streaming, and UGC pipelines. Beyond high-validity lab studies, crowdsourcing offers accurate, reliable, faster, and cheaper evaluation, but suffers from unreliable submissions by workers who ignore instructions or game rewards. Recent tests reveal sophisticated exploits of video metadata and rising use of remote-desktop (RD) connections, both of which bias results. We propose objective and subjective detectors for RD users and compare two mainstream crowdsourcing platforms on their susceptibility and mitigation under realistic test conditions and task designs.
Problem

Research questions and friction points this paper is trying to address.

Detecting unreliable worker submissions in crowdsourced video quality tests
Addressing exploitation of video metadata and remote-desktop connections
Comparing platform susceptibility to biased results in realistic conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Objective and subjective detectors for remote-desktop users (a consistency-check sketch follows this list)
Comparison of two mainstream crowdsourcing platforms' susceptibility to these exploits
Evaluation of mitigation measures under realistic test conditions and task designs
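
To make the subjective side concrete, here is a minimal sketch of a rating-consistency check in the spirit of standard crowdsourcing screening: correlating each worker's scores against the leave-one-out consensus, as in BT.500-style outlier rejection. The cutoff `min_r` and the toy scores are assumptions for illustration; the paper's actual subjective detector may differ.

```python
import statistics

def pearson(x, y):
    """Plain Pearson correlation between two equal-length score vectors."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_inconsistent_raters(ratings, min_r=0.5):
    """ratings: {worker_id: [score per stimulus]}. Flags workers whose scores
    correlate below min_r with the leave-one-out mean of everyone else."""
    flagged = []
    for w in ratings:
        others = [scores for o, scores in ratings.items() if o != w]
        # Consensus per stimulus, excluding the worker under test.
        consensus = [statistics.fmean(col) for col in zip(*others)]
        if pearson(ratings[w], consensus) < min_r:
            flagged.append(w)
    return flagged

# Toy data: three consistent raters and one whose ratings invert the consensus.
scores = {
    "w1": [5, 4, 2, 1, 3],
    "w2": [4, 5, 2, 1, 3],
    "w3": [1, 2, 5, 5, 1],
    "w4": [5, 5, 1, 2, 3],
}
print(flag_inconsistent_raters(scores))  # ['w3']
```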