🤖 AI Summary
This study addresses the challenge of eliciting truthful self-reported scores in unproctored exams while minimizing grading bias and verification costs. The authors propose a class of mechanisms under which honest reporting is a dominant strategy and honest participants are never penalized, even off-equilibrium. Under this no-punishment constraint, they give the first characterization of the optimal tradeoff between the expected number of verifications and grading bias, and exhibit a simple parametrized mechanism that attains it for any distribution of agent types when verification is perfect. When verification is noisy, they show how proper scoring rules can be leveraged in different ways to construct truthful mechanisms with a good, though not necessarily optimal, tradeoff between verification cost and accuracy. The analysis combines tools from mechanism design, game theory, and probabilistic reasoning.
📝 Abstract
Suppose you run a home exam, where students should report their own scores but can cheat freely. You can, if needed, call a limited number of students to class and verify their actual performance against their reported score. We consider the class of mechanisms where truthful reporting is a dominant strategy, and truthful agents are never penalized -- even off-equilibrium. How many students do we need to verify, in expectation, if we want to minimize the bias, i.e., the difference between agents' competence and their expected grade? When perfect verification is available, we characterize the best possible tradeoff between these requirements and provide a simple parametrized mechanism that is optimal in the class for any distribution of agents' types. When verification is noisy, the task becomes much more challenging. We show how proper scoring rules can be leveraged in different ways to construct truthful mechanisms with a good (though not necessarily optimal) tradeoff.
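To make the verification-incentive tradeoff concrete, here is a minimal illustrative sketch, not the paper's mechanism: a naive audit rule that verifies each report with a fixed probability `p_verify` and grades any detected misreport as zero. The function names, the flat audit rate, and the zero-grade penalty are assumptions introduced purely for illustration.

```python
import random

def grade(true_score, reported_score, p_verify, rng=random):
    """Grade under a naive fixed-probability audit (illustrative assumption,
    not the paper's mechanism). A verified misreport is graded 0; truthful
    reports are never penalized, whether verified or not."""
    verified = rng.random() < p_verify
    if verified and reported_score != true_score:
        return 0.0                       # misreport detected: grade wiped
    return reported_score                # report stands (equals the true score if honest)

def expected_gain_from_lying(true_score, reported_score, p_verify):
    """Expected grade advantage of over-reporting versus reporting truthfully."""
    lie_value = (1 - p_verify) * reported_score   # caught with prob. p_verify -> grade 0
    return lie_value - true_score

# Over-reporting 100 instead of a true 60 stops paying once p_verify >= 1 - 60/100 = 0.4.
print(expected_gain_from_lying(60, 100, 0.4))     # 0.0
```

This sketch only illustrates why a sufficiently high detection probability makes over-reporting unprofitable while leaving honest students untouched; the paper itself characterizes the best achievable tradeoff between expected verifications and bias within this no-punishment class, and handles the harder noisy-verification case via proper scoring rules.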