When Should Humans Step In? Optimal Human Dispatching in AI-Assisted Decisions

📅 2026-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of allocating limited human intervention in AI-assisted decision-making so as to correct the AI outputs that matter most for the final decision. The authors propose a decision-theoretic framework that, for the first time, formulates human intervention as a selective information acquisition problem, treating AI predictions as factor-level signals and human judgments as costly but optional sources of information. By combining factor importance with residual variance, they derive a closed-form optimal dispatching rule in the linear setting that is both computationally efficient and predictively strong. The framework accommodates both contextual and non-contextual policies and is compatible with nonparametric and linear reward estimators. On an AI-assisted peer review task, the method approaches the accuracy of full human review while using only 20–30% human intervention, substantially outperforming baselines that rely solely on large language models.

📝 Abstract
AI systems increasingly assist human decision making by producing preliminary assessments of complex inputs. However, such AI-generated assessments can often be noisy or systematically biased, raising a central question: how should costly human effort be allocated to correct AI outputs where it matters the most for the final decision? We propose a general decision-theoretic framework for human-AI collaboration in which AI assessments are treated as factor-level signals and human judgments as costly information that can be selectively acquired. We consider cases where the optimal selection problem reduces to maximizing a reward associated with each candidate subset of factors, and turn policy design into reward estimation. We develop estimation procedures under both nonparametric and linear models, covering contextual and non-contextual selection rules. In the linear setting, the optimal rule admits a closed-form expression with a clear interpretation in terms of factor importance and residual variance. We apply our framework to AI-assisted peer review. Our approach substantially outperforms LLM-only predictions and achieves performance comparable to full human review while using only 20-30% of the human information. Across different selection rules, we find that simpler rules derived under linear models can significantly reduce computational cost without harming final prediction performance. Our results highlight both the value of human intervention and the efficiency of principled dispatching.
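The closed-form rule mentioned in the abstract combines factor importance with residual variance. A minimal toy sketch of that idea, in a linear setting: score each factor by how much squared prediction error human verification would remove, then dispatch humans to the top-scoring factors. Everything here is an illustrative assumption, not the paper's exact rule — the score `beta_j**2 * resid_var_j`, the function names, and the simple top-k selection are hypothetical stand-ins for the formally derived procedure.

```python
import numpy as np

def dispatch_scores(beta, resid_var):
    """Heuristic score beta_j^2 * sigma_j^2 for each factor: the expected
    reduction in squared prediction error from replacing the noisy AI
    signal for factor j with an exact human judgment, under a linear
    reward model (an illustrative assumption, not the paper's rule)."""
    return np.asarray(beta) ** 2 * np.asarray(resid_var)

def select_factors(beta, resid_var, budget):
    """Indices of the `budget` factors to dispatch to human reviewers,
    ranked by the score above (highest first)."""
    scores = dispatch_scores(beta, resid_var)
    return np.argsort(scores)[::-1][:budget]

# Toy example: 5 factors, human budget of 2.
beta = [0.9, 0.1, 0.5, 0.2, 0.7]        # factor importance (linear coefficients)
resid_var = [0.05, 1.0, 0.8, 0.9, 0.3]  # residual variance of each AI signal
print(select_factors(beta, resid_var, budget=2))  # → [2 4]
```

Note that an important factor with a very accurate AI signal (factor 0) is not dispatched: the rule spends the human budget only where importance and AI noise are jointly large.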
Problem

Research questions and friction points this paper is trying to address.

human-AI collaboration
optimal human dispatching
AI-assisted decisions
costly human judgment
decision-theoretic framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

human-AI collaboration
optimal dispatching
decision-theoretic framework
selective human intervention
reward estimation
Lezhi Tan
Stanford University, USA
Naomi Sagan
EE PhD Student, Stanford University
Lihua Lei
Stanford University, USA
José Blanchet
Stanford University, USA