🤖 AI Summary
Traditional statistical tests for large language models (LLMs) often treat distributional differences between responses to semantically equivalent queries—those differing only in superficial syntactic or lexical form—as meaningful, leading to spurious rejections of response equivalence.
Method: We propose a semantically robust hypothesis testing framework that constructs a composite null hypothesis over a set of semantically similar queries, rather than testing individual query pairs in isolation. This mitigates oversensitivity to irrelevant surface-level perturbations. The approach combines response-distribution estimation with asymptotically efficient binary hypothesis testing theory, yielding a test that is asymptotically valid and consistent even though the response distributions must be estimated under a fixed sampling budget.
Contribution/Results: Experiments on both synthetic and real-world LLM deployments demonstrate that our method significantly improves robustness: it correctly identifies response distribution equivalence across semantically equivalent queries while better aligning with users’ practical notion of functional equivalence. The framework achieves a principled balance between statistical rigor and practical deployability.
📝 Abstract
Given an input query, generative models such as large language models produce a random response drawn from a response distribution. Given two input queries, it is natural to ask if their response distributions are the same. While traditional statistical hypothesis testing is designed to address this question, the response distribution induced by an input query is often sensitive to semantically irrelevant perturbations to the query, so much so that a traditional test of equality might indicate that two semantically equivalent queries induce statistically different response distributions. As a result, the outcome of the statistical test may not align with the user's requirements. In this paper, we address this misalignment by incorporating into the testing procedure consideration of a collection of semantically similar queries. In our setting, the mapping from the collection of user-defined semantically similar queries to the corresponding collection of response distributions is not known a priori and must be estimated under a fixed sampling budget. Although the problem we address is quite general, we focus our analysis on the setting where the responses are binary, show that the proposed test is asymptotically valid and consistent, and discuss important practical considerations with respect to power and computation.
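To make the binary-response setting concrete, here is a minimal sketch of the composite-null idea, not the paper's actual procedure: each semantically similar variant of one query induces an estimated Bernoulli response rate, and equivalence with a target query is rejected only if the target's rate differs significantly from every variant's. All function names, the two-proportion z-test, and the max-over-p-values aggregation are illustrative assumptions on our part.

```python
import math
import random

def sample_binary_responses(p, n, rng):
    """Simulate n binary (e.g. yes/no) model responses; return success count."""
    return sum(1 if rng.random() < p else 0 for _ in range(n))

def two_proportion_pvalue(x1, n1, x2, n2):
    """Two-sided two-proportion z-test of H0: p1 == p2 (normal approximation)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:  # all successes or all failures in both samples
        return 1.0
    z = abs(x1 / n1 - x2 / n2) / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

def composite_null_pvalue(variant_counts, target_count):
    """Composite null: the target query's response distribution matches at
    least one semantically similar variant. Reject equivalence only if the
    target differs from every variant, i.e. take the maximum of the pairwise
    p-values (a conservative union-of-nulls aggregation)."""
    x2, n2 = target_count
    return max(two_proportion_pvalue(x1, n1, x2, n2)
               for x1, n1 in variant_counts)

# Simulated example: two variants of a query (success rates 0.5 and 0.9)
# tested against a target whose true rate is 0.5. Because one variant
# matches, the composite test should not reject equivalence.
rng = random.Random(0)
variants = [(sample_binary_responses(0.5, 200, rng), 200),
            (sample_binary_responses(0.9, 200, rng), 200)]
target = (sample_binary_responses(0.5, 200, rng), 200)
print(composite_null_pvalue(variants, target))
```

A per-query-pair test would reject whenever any single variant disagrees with the target; aggregating over the user-defined collection is what keeps semantically irrelevant rephrasings from triggering spurious rejections, at the cost of some power.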