Testing for LLM response differences: the case of a composite null consisting of semantically irrelevant query perturbations

📅 2025-09-13
🤖 AI Summary
Traditional statistical tests for large language models (LLMs) often misattribute distributional differences in model responses to semantically equivalent queries (those differing only in superficial syntactic or lexical form), leading to spurious rejections of response equivalence.

Method: The paper proposes a semantics-robust hypothesis testing framework that constructs a composite null hypothesis over a collection of semantically similar queries, rather than testing individual query pairs in isolation. This mitigates oversensitivity to irrelevant surface-level perturbations. The approach estimates the response distributions under a fixed sampling budget and establishes asymptotic validity and consistency of the resulting test.

Contribution/Results: Experiments on both synthetic and real-world LLM deployments show that the method improves robustness: it correctly identifies response-distribution equivalence across semantically equivalent queries while better aligning with users' practical notion of functional equivalence, striking a principled balance between statistical rigor and practical deployability.

📝 Abstract
Given an input query, generative models such as large language models produce a random response drawn from a response distribution. Given two input queries, it is natural to ask if their response distributions are the same. While traditional statistical hypothesis testing is designed to address this question, the response distribution induced by an input query is often sensitive to semantically irrelevant perturbations to the query, so much so that a traditional test of equality might indicate that two semantically equivalent queries induce statistically different response distributions. As a result, the outcome of the statistical test may not align with the user's requirements. In this paper, we address this misalignment by incorporating into the testing procedure consideration of a collection of semantically similar queries. In our setting, the mapping from the collection of user-defined semantically similar queries to the corresponding collection of response distributions is not known a priori and must be estimated, with a fixed budget. Although the problem we address is quite general, we focus our analysis on the setting where the responses are binary, show that the proposed test is asymptotically valid and consistent, and discuss important practical considerations with respect to power and computation.
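The composite-null idea in the abstract can be illustrated with a small simulation. The sketch below is hypothetical and is not the paper's exact procedure: each query induces a Bernoulli response distribution, queries within a collection are semantically equivalent paraphrases whose response probabilities differ only slightly, and a classical two-proportion z-statistic is computed for each pair. The toy decision rule (an assumption for illustration) declares a real difference between two queries only when their cross-collection statistic exceeds every within-collection statistic, i.e. when the gap is larger than any gap attributable to paraphrasing alone.

```python
import math
import random

random.seed(0)

def sample_responses(p, budget):
    """Draw `budget` binary responses from a query whose response
    distribution is Bernoulli(p), under a fixed sampling budget."""
    return [1 if random.random() < p else 0 for _ in range(budget)]

def two_proportion_z(x, y):
    """Classical two-sample z-statistic for equality of Bernoulli means."""
    n, m = len(x), len(y)
    p1, p2 = sum(x) / n, sum(y) / m
    pooled = (sum(x) + sum(y)) / (n + m)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n + 1 / m))
    return 0.0 if se == 0 else (p1 - p2) / se

# Two collections of semantically equivalent paraphrases (hypothetical
# probabilities): within a collection the response probabilities differ
# only by semantically irrelevant perturbations; across collections
# they differ substantially.
collection_a = [0.70, 0.72, 0.68]   # paraphrases of query A
collection_b = [0.40, 0.42, 0.38]   # paraphrases of query B
budget = 500                        # fixed sampling budget per query

samples_a = [sample_responses(p, budget) for p in collection_a]
samples_b = [sample_responses(p, budget) for p in collection_b]

# Within-collection statistics: differences a user regards as irrelevant.
within = [abs(two_proportion_z(samples_a[i], samples_a[j]))
          for i in range(3) for j in range(i + 1, 3)]
within += [abs(two_proportion_z(samples_b[i], samples_b[j]))
           for i in range(3) for j in range(i + 1, 3)]

# Cross-collection statistic: the difference actually being tested.
across = abs(two_proportion_z(samples_a[0], samples_b[0]))

reject = across > max(within)
print(f"max within-collection |z| = {max(within):.2f}")
print(f"across-collection |z|     = {across:.2f}")
print("reject equality:", reject)
```

A naive pairwise test would also reject equality for two paraphrases within the same collection whenever the budget is large enough; calibrating against the within-collection variation is one simple way to encode the user's notion that such differences should not count.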
Problem

Research questions and friction points this paper is trying to address.

Tests LLM response differences under semantic perturbations
Addresses misalignment between statistical tests and user needs
Proposes valid test for semantically similar query distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic similarity-based hypothesis testing
Asymptotically valid test for binary responses
Fixed-budget estimation of response distributions
Aranyak Acharyya
Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, MD 21218, USA
Carey E. Priebe
Professor of Applied Mathematics and Statistics, Johns Hopkins University
statistical inference for high-dimensional and graph data
Hayden S. Helm
Helivan, San Francisco, CA 94123, USA