🤖 AI Summary
In cross-platform subjective video quality assessment, participant unreliability (instruction neglect, reward abuse, video-metadata tampering, and remote-desktop usage) severely compromises data integrity. Method: This paper presents the first systematic identification and modeling of remote-desktop abuse and metadata manipulation, and proposes a hybrid objective-subjective framework for detecting anomalous participation. It integrates behavioral analytics, screen-resolution fingerprinting, network-latency features, and subjective rating-consistency checks. Contribution/Results: Evaluated on two major crowdsourcing platforms under real-world conditions, the framework significantly reduces anomalous participation rates and markedly improves the stability and trustworthiness of video quality assessments, establishing a reproducible, deployable, and robust paradigm for mitigating adversarial interference in crowdsourced subjective evaluation.
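One of the feature types named above, the subjective rating-consistency check, can be illustrated with a minimal sketch: flag workers whose scores correlate poorly with the crowd's per-video mean opinion scores (MOS). The function names, data layout, and the 0.5 correlation threshold below are illustrative assumptions, not the paper's actual method or parameters.

```python
def mean_opinion_scores(ratings):
    """ratings: {worker: {video: score}} -> {video: mean score across workers}."""
    totals, counts = {}, {}
    for scores in ratings.values():
        for video, s in scores.items():
            totals[video] = totals.get(video, 0.0) + s
            counts[video] = counts.get(video, 0) + 1
    return {v: totals[v] / counts[v] for v in totals}

def pearson(xs, ys):
    """Plain Pearson correlation; returns 0.0 for degenerate (constant) inputs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_inconsistent(ratings, min_corr=0.5):
    """Return workers whose ratings correlate with the MOS below min_corr."""
    mos = mean_opinion_scores(ratings)
    flagged = []
    for worker, scores in ratings.items():
        videos = list(scores)
        r = pearson([scores[v] for v in videos], [mos[v] for v in videos])
        if r < min_corr:
            flagged.append(worker)
    return flagged
```

A production variant would likely use a leave-one-out MOS (excluding the worker being scored) so each worker's own ratings do not inflate their apparent consistency.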
📝 Abstract
Subjective video quality assessment (VQA) is the gold standard for measuring end-user experience across communication, streaming, and UGC pipelines. Beyond high-validity lab studies, crowdsourcing offers accurate and reliable evaluation that is faster and cheaper, but it suffers from unreliable submissions by workers who ignore instructions or game rewards. Recent tests reveal sophisticated exploits of video metadata and rising use of remote-desktop (RD) connections, both of which bias results. We propose objective and subjective detectors for RD users and compare two mainstream crowdsourcing platforms on their susceptibility and on mitigation strategies under realistic test conditions and task designs.
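An objective RD-user detector of the kind described might combine the feature types the summary mentions, such as a screen-resolution fingerprint and latency-related signals, into a simple suspicion score. The sketch below is a hypothetical heuristic: the resolution set, threshold values, and feature choices are all assumptions for illustration, not the detectors proposed in the paper.

```python
# Assumed typical virtual-display sizes often reported by RD sessions (illustrative).
COMMON_RD_RESOLUTIONS = {(1024, 768), (1280, 1024), (800, 600)}

def rd_suspicion_score(width, height, rtt_ms, frame_jitter_ms):
    """Count how many RD-typical signals a session exhibits (0..3)."""
    score = 0
    if (width, height) in COMMON_RD_RESOLUTIONS:
        score += 1          # resolution fingerprint matches a common virtual display
    if rtt_ms > 150:
        score += 1          # high round-trip time suggests a relayed session
    if frame_jitter_ms > 30:
        score += 1          # irregular frame pacing typical of screen streaming
    return score

def is_remote_desktop(width, height, rtt_ms, frame_jitter_ms, threshold=2):
    """Flag a session as likely RD when enough signals co-occur."""
    return rd_suspicion_score(width, height, rtt_ms, frame_jitter_ms) >= threshold
```

Requiring multiple signals to co-occur (threshold >= 2) reduces false positives from participants who merely have a slow network or an unusual monitor.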