🤖 AI Summary
This work addresses the lack of efficient, low-cost mechanisms for assessing input adequacy for large language models (LLMs) before any output is generated. The authors propose CLOTHO, the first task-adaptive, pre-generation input adequacy measure. Unlike prior approaches, CLOTHO requires neither output generation nor exhaustive human annotation: it models input difficulty via LLM hidden states, clusters unlabelled inputs with a Gaussian Mixture Model (GMM), automatically selects the most informative samples to build a labelled reference set, and quantifies failure risk for unseen inputs. It achieves effective failure prediction after labelling only 5.4% of inputs on average, and its adequacy scores transfer across models, including proprietary ones. Evaluated on eight benchmark tasks and three open-weight LLMs, CLOTHO attains a mean ROC-AUC of 0.716. When prioritising test inputs for proprietary models, it raises the number of detected failures per 100 inputs from 18.7 to 42.5, substantially outperforming random prioritisation.
📝 Abstract
Software increasingly relies on the emergent capabilities of Large Language Models (LLMs), from natural language understanding to program analysis and generation. Yet testing them on specific tasks remains difficult and costly: many prompts lack ground truth, forcing reliance on human judgment, while existing uncertainty and adequacy measures typically require full inference. A key challenge is to assess input adequacy in a way that reflects the demands of the task, ideally before even generating any output. We introduce CLOTHO, a task-specific, pre-generation adequacy measure that estimates input difficulty directly from hidden LLM states. Given a large pool of unlabelled inputs for a specific task, CLOTHO uses a Gaussian Mixture Model (GMM) to adaptively sample the most informative cases for human labelling. Based on this reference set, the GMM can then rank unseen inputs by their likelihood of failure. In our empirical evaluation across eight benchmark tasks and three open-weight LLMs, CLOTHO can predict failures with a ROC-AUC of 0.716, after labelling reference sets that are on average only 5.4% of inputs. It does so without generating any outputs, thereby reducing costs compared to existing uncertainty measures. Comparison of CLOTHO and post-generation uncertainty measures shows that the two approaches complement each other. Crucially, we show that adequacy scores learnt from open-weight LLMs transfer effectively to proprietary models, extending the applicability of the approach. When prioritising test inputs for proprietary models, CLOTHO increases the average number of failing inputs from 18.7 to 42.5 out of 100, compared to random prioritisation.
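To make the pipeline described in the abstract concrete, below is a minimal, self-contained sketch of the core idea: fit a GMM on (here, synthetic stand-ins for) LLM hidden states, label only the inputs nearest each component mean as the reference set, and score unseen inputs by a responsibility-weighted failure rate. The data, the labelling oracle, and the `adequacy_risk` helper are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-input hidden-state vectors
# (CLOTHO would use activations extracted from an open-weight LLM).
easy = rng.normal(loc=0.0, scale=0.5, size=(200, 8))
hard = rng.normal(loc=3.0, scale=0.5, size=(200, 8))
pool = np.vstack([easy, hard])  # large unlabelled input pool

# Cluster the unlabelled pool with a Gaussian Mixture Model.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pool)

# Reference set: label only the input closest to each component mean.
ref_idx = [np.argmin(np.linalg.norm(pool - m, axis=1)) for m in gmm.means_]
# Hypothetical labelling oracle: inputs from the 'hard' cluster fail.
ref_fail = np.array([1.0 if pool[i].mean() > 1.5 else 0.0 for i in ref_idx])

def adequacy_risk(x):
    """Failure risk of an unseen input: its GMM component
    responsibilities weighted by each component's failure rate."""
    resp = gmm.predict_proba(x.reshape(1, -1))[0]
    return float(resp @ ref_fail)

# Unseen inputs drawn near each cluster get correspondingly ranked risks.
risk_easy = adequacy_risk(rng.normal(0.0, 0.5, 8))
risk_hard = adequacy_risk(rng.normal(3.0, 0.5, 8))
```

Because the risk is computed from hidden states alone, ranking test inputs this way requires no output generation, which is where the cost saving over post-generation uncertainty measures comes from.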