🤖 AI Summary
This work addresses the problem of constructing minimal valid prediction sets for black-box large language models (e.g., CodeLlama, GPT) in code and mathematical text generation. Validity is defined by a user-specified admissibility criterion, such as at least one output passing all test cases, and coverage of an admissible solution must be guaranteed at a pre-specified confidence level (e.g., 90%). We formulate generative prediction set construction within the conformal regression framework for the first time, exploiting the distributional structure of the minimum number of samples needed to reach an admissible output to achieve provable statistical coverage guarantees with compact set sizes. The method operates solely via black-box sampling, requiring no gradient access or inspection of internal parameters. Evaluated on multiple code and math word problem benchmarks, our approach reduces average prediction set size by 35% at 90% confidence compared to state-of-the-art methods, while strictly satisfying the validity constraint.
📝 Abstract
We consider the problem of generating valid and small prediction sets by sampling outputs (e.g., software code and natural language text) from a black-box deep generative model for a given input (e.g., a textual prompt). The validity of a prediction set is determined by a user-defined binary admissibility function that depends on the target application; for example, in code generation, a set may be required to contain at least one program that passes all test cases. To address this problem, we develop a simple and effective conformal inference algorithm referred to as Generative Prediction Sets (GPS). Given a set of calibration examples and black-box access to a deep generative model, GPS can generate prediction sets with provable guarantees. The key insight behind GPS is to exploit the inherent structure in the distribution of the minimum number of samples needed to obtain an admissible output, and to build a simple conformal regression approach over this quantity. Experiments on multiple code and math word problem datasets using different large language models demonstrate the efficacy of GPS over state-of-the-art methods.
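The core mechanism described above (recording, on calibration examples, how many samples are needed before an admissible output appears, then using a conformal quantile of those counts as the test-time sampling budget) can be sketched as follows. This is an illustrative simplification under our own assumptions, not the paper's implementation; the names `calibrate_sample_budget`, `model_sample_fn`, and `is_admissible` are placeholders for a black-box sampler and a user-defined admissibility check.

```python
import math

def calibrate_sample_budget(model_sample_fn, is_admissible,
                            calibration_prompts, alpha=0.1, max_samples=50):
    """Return a sampling budget B such that, under exchangeability, a set of
    B samples contains an admissible output with probability >= 1 - alpha.

    model_sample_fn(prompt) -> one sampled output (black-box model access).
    is_admissible(prompt, output) -> bool (user-defined admissibility check).
    """
    counts = []
    for prompt in calibration_prompts:
        # Minimum number of samples until the first admissible output.
        n = max_samples + 1  # censored value: none found within the budget
        for i in range(1, max_samples + 1):
            if is_admissible(prompt, model_sample_fn(prompt)):
                n = i
                break
        counts.append(n)
    # Standard split-conformal quantile: the ceil((1 - alpha) * (m + 1))-th
    # smallest calibration count.
    counts.sort()
    m = len(counts)
    k = min(math.ceil((1 - alpha) * (m + 1)), m)
    return counts[k - 1]

def prediction_set(model_sample_fn, prompt, budget):
    """Form the prediction set by drawing `budget` samples for the prompt."""
    return [model_sample_fn(prompt) for _ in range(budget)]
```

As a toy illustration, if prompt `p` happens to need exactly `p` samples before an admissible output appears, calibrating on prompts 1 through 9 at `alpha=0.1` yields the 9th-smallest count as the budget. The censoring value `max_samples + 1` keeps the quantile valid even when some calibration prompts never yield an admissible output within the budget.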