🤖 AI Summary
This work addresses the misalignment between human evaluations and model-based risk assessments in risk-sensitive applications of large language models (LLMs), caused by undesirable tail-event outputs (e.g., toxic or offensive responses). We propose a lightweight black-box calibration framework that—uniquely—integrates conformal prediction with L-statistics theory to provide provably valid statistical guarantees for arbitrary quantile-weighted risk measures, overcoming the theoretical limitations of prior heuristic approaches. The framework achieves ≥95% coverage guarantees across diverse LLM risk evaluation tasks, substantially mitigating human–model scoring discrepancies. Crucially, it incurs less than 0.5% additional inference latency, ensuring both statistical rigor and practical deployability.
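The coverage guarantee above comes from conformal prediction. As a minimal illustration of the underlying mechanism (standard split conformal calibration, not the paper's exact procedure; the function name and setup are hypothetical), one can pick a threshold on held-out calibration scores so that a fresh, exchangeable score falls below it with probability at least 1 − α:

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.05):
    """Split-conformal quantile: given n exchangeable calibration scores,
    a new exchangeable score is <= this threshold with prob. >= 1 - alpha."""
    s = np.sort(np.asarray(cal_scores, dtype=float))
    n = s.size
    k = int(np.ceil((n + 1) * (1 - alpha)))  # conservative rank correction
    if k > n:
        return float("inf")  # too few calibration points for this alpha
    return float(s[k - 1])

# With scores 1..99 and alpha = 0.05: rank k = ceil(100 * 0.95) = 95,
# so the threshold is the 95th smallest score.
print(conformal_threshold(range(1, 100), alpha=0.05))  # 95.0
```

The `(n + 1)` in the rank is what turns an empirical quantile into a finite-sample guarantee; the paper extends this idea from single quantiles to quantile-weighted risk measures.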
📝 Abstract
Recent developments in large language models (LLMs) have led to their widespread usage across various tasks. As LLMs become pervasive in society, assurance of the reliability of their performance becomes essential. In particular, risk-sensitive applications demand meticulous attention to unexpectedly poor outcomes, i.e., tail events, such as toxic answers, humiliating language, and offensive outputs. Because acquiring human annotations is costly, general-purpose scoring models have been created to automate the quantification of these tail events. This automation introduces potential human–machine misalignment between the respective scoring mechanisms. In this work, we present a lightweight calibration framework for black-box models that ensures the alignment of humans and machines with provable guarantees. Our framework provides a rigorous approach to controlling, with high confidence, any distortion risk measure characterized by a weighted average of quantiles of the loss incurred by the LLM. The theoretical foundation of our method relies on the connection between conformal risk control and a classical family of statistics, namely L-statistics. To demonstrate the utility of our framework, we conduct comprehensive experiments addressing the issue of human–machine misalignment.
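A distortion risk measure of the kind the abstract describes can be estimated on a finite sample as an L-statistic: a weighted sum of order statistics, with weights induced by a distortion function g. The sketch below is illustrative only (the function names and distortion choices are assumptions, not taken from the paper); it shows how the mean and CVaR both arise from the same quantile-weighting template:

```python
import numpy as np

def distortion_risk(losses, g):
    """Empirical distortion risk measure as an L-statistic:
    rho = sum_i [g(i/n) - g((i-1)/n)] * X_(i),
    where X_(1) <= ... <= X_(n) are the losses sorted ascending."""
    x = np.sort(np.asarray(losses, dtype=float))
    n = x.size
    grid = g(np.arange(n + 1) / n)   # g(0), g(1/n), ..., g(1)
    weights = np.diff(grid)          # w_i = g(i/n) - g((i-1)/n)
    return float(weights @ x)

# Two example distortion functions (hypothetical choices):
mean_g = lambda u: u  # identity distortion recovers the sample mean
def cvar_g(a):
    # Puts all weight uniformly on the worst (1 - a) fraction of losses.
    return lambda u: np.maximum(0.0, (u - a) / (1.0 - a))

losses = [1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]
print(distortion_risk(losses, mean_g))       # 5.5  (sample mean)
print(distortion_risk(losses, cvar_g(0.5)))  # 8.0  (mean of worst half)
```

Here the weights sum to g(1) − g(0) = 1, so any monotone distortion g yields a valid quantile-weighted average; the paper's contribution is calibrating such measures with conformal, finite-sample guarantees rather than merely estimating them.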