🤖 AI Summary
This study addresses the challenge of efficiently selecting optimal configurations in service system design using textual evidence such as customer support dialogues while minimizing human review costs. The problem is formulated as a sequential decision-making process, and the authors propose the PP-LUCB algorithm, which leverages large language models to generate cheap but biased proxy scores, incorporates selective human auditing, and corrects for bias through inverse-propensity-weighted residuals combined with anytime-valid confidence sequences that dynamically refine the auditing policy. Theoretical analysis establishes the algorithm's correctness under arm-dependent biases and demonstrates near-optimal sample efficiency. Empirical results on a customer support ticket classification task show that PP-LUCB identifies the optimal configuration in all 40 experimental trials while reducing human review costs by 90%.
📄 Abstract
Designing service systems requires selecting among alternative configurations -- choosing the best chatbot variant, the optimal routing policy, or the most effective quality control procedure. In many service systems, the primary evidence of performance quality is textual -- customer support transcripts, complaint narratives, compliance review reports -- rather than the scalar measurements assumed by classical optimization methods. Large language models (LLMs) can read such textual evidence and produce standardized quality scores, but these automated judges exhibit systematic biases that vary across alternatives and evaluation instances. Human expert review remains accurate but costly. We study how to identify the best service configuration with high confidence while minimizing expensive human audits, given that automated evaluation is cheap but biased. We formalize this as a sequential decision problem where a biased proxy score is observed for every evaluation, and a verified outcome can be acquired selectively at additional cost. We prove that LLM-only selection fails under arm-dependent bias, and that naive selective-audit estimators can be asymptotically biased. We develop an estimator combining proxy scores with inverse-propensity-weighted residuals and construct anytime-valid confidence sequences. Our algorithm, PP-LUCB, jointly decides which alternatives to evaluate and whether to request human audits, concentrating reviews where the LLM judge is least reliable. We prove correctness and establish instance-dependent cost bounds showing near-optimal efficiency. On a customer support ticket classification task, our algorithm correctly identifies the best model in 40/40 trials while achieving a 90% audit cost reduction.
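The core debiasing idea -- using the cheap proxy score for every evaluation and correcting its bias with inverse-propensity-weighted residuals from the audited subset -- can be illustrated with a minimal sketch. The function name `pp_ipw_estimate` and the simulation setup below are illustrative assumptions, not the paper's implementation; the estimator is simply the mean proxy score plus the IPW-weighted mean of the audited residuals, which is unbiased for the true mean outcome as long as the audit propensities are known and positive.

```python
import numpy as np

def pp_ipw_estimate(proxy, audited, outcomes, propensity):
    """Prediction-powered mean estimate with IPW residual correction (illustrative).

    proxy:      LLM proxy scores f(X_i), observed for all n evaluations
    audited:    0/1 indicator, 1 if a costly human audit was requested
    outcomes:   verified outcomes Y_i (only used where audited == 1)
    propensity: audit probabilities pi_i used when sampling `audited`
    """
    # IPW residual: (A_i / pi_i) * (Y_i - f(X_i)); zero when not audited.
    residual = np.where(audited == 1, (outcomes - proxy) / propensity, 0.0)
    # Proxy mean is biased; the residual term corrects it in expectation.
    return proxy.mean() + residual.mean()

# Toy simulation: true mean outcome 0.7, proxy overestimates by a constant 0.1,
# and only ~20% of evaluations receive a human audit.
rng = np.random.default_rng(0)
n = 50_000
y = (rng.random(n) < 0.7).astype(float)   # verified outcomes
proxy = y + 0.1                           # biased proxy scores
pi = np.full(n, 0.2)                      # audit propensities
audited = (rng.random(n) < pi).astype(float)

naive = proxy.mean()                                  # ~0.8, biased
corrected = pp_ipw_estimate(proxy, audited, y, pi)    # ~0.7, debiased
```

Under arm-dependent bias, comparing naive proxy means across configurations can flip the ranking; the corrected estimator restores valid comparisons while charging audit cost for only a fraction of evaluations.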