🤖 AI Summary
This study addresses how agents balance acting immediately against seeking clarifying information under uncertainty. The authors propose a computational model based on expected regret that formally characterizes, for the first time, how contextual uncertainty and action costs jointly shape the decision to ask clarification questions. Using an experimental paradigm that combines linguistic clarification with non-linguistic action choices, they show that people's propensity to seek clarification increases with the potential loss from acting incorrectly, supporting the proposed rational trade-off. These findings point to a cognitive strategy in which humans proactively reduce uncertainty in high-stakes situations to avoid large losses, offering theoretical and empirical grounding for understanding metacognitive decision-making in communicative contexts.
📝 Abstract
When deciding how to act under uncertainty, agents may first act to reduce that uncertainty, or they may act in spite of it. In communicative settings, an important way of reducing uncertainty is to ask clarification questions (CQs). We predict that the decision to ask a CQ depends on both contextual uncertainty and the cost of alternative actions, and that these factors interact: uncertainty should matter most when acting incorrectly is costly. We formalize this interaction in a computational model based on expected regret: how much an agent stands to lose by acting now rather than with full information. We test these predictions in two experiments, one examining purely linguistic responses to questions and another extending to choices between clarification and non-linguistic action. Taken together, our results suggest a rational tradeoff: humans tend to seek clarification in proportion to the risk of substantial loss when acting under uncertainty.
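The regret-based tradeoff described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual model: the two-state referent task, the utility function, and the `stakes`/`cq_cost` parameters are all assumptions chosen to show the predicted interaction between uncertainty and action cost.

```python
# Illustrative sketch (not the paper's model): an agent holds a belief over
# possible intended referents and chooses between acting now and asking a
# clarification question (CQ).

ACTIONS = ["pick_red", "pick_blue"]

def utility(action, state, stakes):
    # A correct action earns `stakes`; an incorrect one earns nothing.
    correct = (action == "pick_red") == (state == "wants_red")
    return stakes if correct else 0.0

def expected_regret(action, belief, stakes):
    # E_s[ max_a' U(a', s) - U(action, s) ]: the expected utility forgone
    # by taking `action` now instead of acting with full information.
    return sum(
        p * (max(utility(a, s, stakes) for a in ACTIONS)
             - utility(action, s, stakes))
        for s, p in belief.items()
    )

def should_ask_cq(belief, stakes, cq_cost):
    # Ask iff even the best immediate action risks losing more in
    # expectation than a clarification question costs.
    best_regret = min(expected_regret(a, belief, stakes) for a in ACTIONS)
    return best_regret > cq_cost

belief = {"wants_red": 0.6, "wants_blue": 0.4}
print(should_ask_cq(belief, stakes=1.0, cq_cost=0.5))   # low stakes: False
print(should_ask_cq(belief, stakes=10.0, cq_cost=0.5))  # high stakes: True
```

With the belief held fixed, raising the stakes flips the decision from acting to asking, which is exactly the interaction the abstract predicts: the same level of uncertainty warrants a CQ only when an error would be costly.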