🤖 AI Summary
This study investigates whether uncertainty in algorithmic predictions — specifically, disagreement between similarly accurate models — biases the decisions of human experts in high-stakes settings. Embedding a randomized field experiment within a selective college admissions cycle, the authors randomly presented admissions officers with one of two algorithmic scores that exhibited similar overall accuracy but diverged in individual-level predictions. This design enables an empirical quantification of "algorithmic reliance" in a real-world, high-consequence context. Using causal inference methods, the authors find that despite substantial discrepancies between the algorithmic predictions, admissions officers' final decisions remained virtually unchanged. The results suggest that, under institutional constraints, human experts can effectively resist arbitrary algorithmic outputs, demonstrating notable robustness of professional judgment against algorithmic uncertainty.
📝 Abstract
Algorithmic predictions are inherently uncertain: even models with similar aggregate accuracy can produce different predictions for the same individual, raising concerns that high-stakes decisions may become sensitive to arbitrary modeling choices. In this paper, we define algorithmic reliance as the extent to which a decision outcome depends on whether a more favorable or less favorable algorithmic prediction is presented to the decision-maker. We estimate this in a randomized field experiment (n=19,545) embedded in a selective U.S. college admissions cycle, in which admissions officers reviewed each application alongside an algorithmic score while we randomly varied which of two similarly accurate prediction models generated the score shown. Although the two models performed similarly in aggregate, they frequently assigned different scores to the same applicant, creating exogenous variation in the score shown. Surprisingly, we find little evidence of algorithmic reliance: presenting a more favorable score does not meaningfully increase an applicant's probability of admission on average, even when the models disagree substantially. These findings suggest that, in this expert, high-stakes setting, human decision-making is largely invariant to arbitrary variation in algorithmic predictions, underscoring the role of professional discretion and institutional context in mediating the downstream effects of algorithmic uncertainty.
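Because score assignment is randomized, the "algorithmic reliance" estimand reduces to a simple difference in admission rates between applicants shown the more favorable score and those shown the less favorable one. The sketch below is illustrative only (toy data, hypothetical names — not the authors' code or data):

```python
# Illustrative sketch of the reliance estimand: the difference in mean
# admission rates between the randomized "shown more favorable score"
# group and the "shown less favorable score" group. Under randomization,
# this difference is an unbiased estimate of the average causal effect
# of presenting the higher of the two model scores.

def estimate_reliance(admitted, shown_favorable):
    """admitted: list of 0/1 admission outcomes.
    shown_favorable: list of bools, True if the applicant was randomly
    shown the more favorable of the two model scores.
    Returns: difference in admission rates (favorable minus unfavorable)."""
    fav = [a for a, f in zip(admitted, shown_favorable) if f]
    unfav = [a for a, f in zip(admitted, shown_favorable) if not f]
    return sum(fav) / len(fav) - sum(unfav) / len(unfav)

# Toy data: 1 = admitted, 0 = rejected.
admitted        = [1, 0, 1, 0, 1, 0, 1, 0]
shown_favorable = [True, True, True, True, False, False, False, False]

# Equal admission rates in both arms -> estimated reliance of 0.0,
# the pattern the paper reports finding.
print(estimate_reliance(admitted, shown_favorable))  # 0.0
```

In the actual study, a null result means this difference is statistically indistinguishable from zero even among applicants for whom the two models disagree substantially.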