🤖 AI Summary
When individuals incur cognitive costs in evaluating alternatives, their choices may fail to reflect true preferences, causing distortions in conventional voting mechanisms.
Method: The paper develops a robust mechanism design framework for preference elicitation under learning costs, drawing on information economics and social choice theory to characterize which preference statistics remain identifiable under bounded rationality, and finds that the identifiable set is quite restricted.
Contribution: Building on these identification results, we propose a voting correction mechanism designed to combat uninformed voting. We establish its implementability and robustness within a Bayesian framework, showing that it recovers consistent preference aggregates despite costly evaluation and thereby improves alignment between voting outcomes and the population's true preferences. By formally accounting for cognitive constraints, the approach offers a new lens on social choice in settings where evaluating alternatives is costly, connecting theoretical rigor with behavioral and institutional design.
📝 Abstract
If people find it costly to evaluate the options available to them, their choices may not directly reveal their preferences. Yet, it is conceivable that a researcher can still learn about a population's preferences with careful experiment design. We formalize the researcher's problem in a model of robust mechanism design where it is costly for individuals to learn about how much they value a product. We characterize the statistics that the researcher can identify, and find that they are quite restricted. Finally, we apply our positive results to social choice and propose a way to combat uninformed voting.
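The distortion described above can be illustrated with a toy simulation. This is not the paper's model, just a hypothetical sketch: each voter draws a private value for a proposal, but only voters whose learning cost falls below a threshold actually evaluate it; the rest vote on an uninformative prior. The observed yes-share then diverges from the population's true preference.

```python
import random

def simulate(n=100_000, informed_share=0.3, seed=0):
    """Toy illustration (hypothetical, not the paper's mechanism):
    compare the true yes-share with the observed vote when only a
    fraction of voters pay the cost to learn their value."""
    rng = random.Random(seed)
    true_yes = observed_yes = 0
    for _ in range(n):
        v = rng.uniform(-0.5, 1.0)   # private value; positive means voter truly favors
        if v > 0:
            true_yes += 1
        if rng.random() < informed_share:
            observed_yes += v > 0     # informed voter reveals preference
        else:
            observed_yes += rng.random() < 0.5  # uninformed: coin-flip on the prior
    return true_yes / n, observed_yes / n

true_share, observed_share = simulate()
# With two-thirds of voters truly in favor but most voting uninformed,
# the observed yes-share sits well below the true preference aggregate.
```

The gap between the two shares is the kind of systematic bias a researcher would need careful design to undo, which is what motivates the identification question studied in the paper.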