🤖 AI Summary
Detecting large language models (LLMs) masquerading as human annotators and submitting spurious feedback in crowdsourcing, particularly when ground-truth labels are unavailable, remains a critical challenge. Method: We propose a training-free, label-agnostic peer prediction scoring method that models the conditional dependence among worker responses given LLM-generated labels under an "LLM-assisted cheating" assumption to identify low-effort cheating behavior. Contribution/Results: This work is the first to adapt peer prediction to LLM cheating detection; we provide theoretical guarantees of correctness and derive verifiable identifiability conditions. Unlike prior approaches that rely on high-dimensional data such as text, our method supports low-dimensional annotation tasks (e.g., multiple-choice labeling). Empirical evaluation on real-world crowdsourcing datasets demonstrates robust cheating detection and significant performance gains over state-of-the-art baselines.
📝 Abstract
The recent success of generative AI highlights the crucial role of high-quality human feedback in building trustworthy AI systems. However, the increasing use of large language models (LLMs) by crowdsourcing workers poses a significant challenge: datasets intended to reflect human input may be compromised by LLM-generated responses. Existing LLM detection approaches often rely on high-dimensional data such as text, making them unsuitable for annotation tasks like multiple-choice labeling. In this work, we investigate the potential of peer prediction -- a mechanism that evaluates the information within workers' responses without using ground truth -- to mitigate LLM-assisted cheating in crowdsourcing, with a focus on annotation tasks. Our approach quantifies the correlations between worker answers while conditioning on (a subset of) LLM-generated labels available to the requester. Building on prior research, we propose a training-free scoring mechanism with theoretical guarantees under a crowdsourcing model that accounts for LLM collusion. We establish conditions under which our method is effective and empirically demonstrate its robustness in detecting low-effort cheating on real-world crowdsourcing datasets.
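The conditioning idea in the abstract can be illustrated with a toy plug-in estimator: honest workers remain statistically dependent on their peers even after conditioning on the LLM's labels, whereas a worker who copies the LLM verbatim becomes conditionally independent of peers given those labels, so their score collapses to zero. This is only a minimal sketch of the intuition, not the paper's actual scoring mechanism; the `peer_score` helper and the mutual-information estimators are illustrative assumptions.

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in estimate of the mutual information I(X;Y) in nats
    from paired samples of two discrete label sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def conditional_mutual_info(xs, ys, zs):
    """Plug-in estimate of I(X;Y|Z): mutual information computed
    within each stratum of Z, averaged with the stratum weights."""
    n = len(zs)
    total = 0.0
    for z, cz in Counter(zs).items():
        idx = [i for i in range(n) if zs[i] == z]
        total += (cz / n) * mutual_info([xs[i] for i in idx],
                                        [ys[i] for i in idx])
    return total

def peer_score(worker, peer, llm_labels):
    """Score a worker by the dependence their answers retain with a
    peer worker after conditioning on the LLM-generated labels.
    Copying the LLM makes the worker constant within each stratum,
    driving the score to zero (illustrative, not the paper's method)."""
    return conditional_mutual_info(worker, peer, llm_labels)
```

On simulated binary annotation data where two honest workers answer noisy versions of the ground truth and a cheater submits the LLM's labels verbatim, the honest pair keeps a strictly positive conditional score while the cheater scores zero.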