🤖 AI Summary
This study addresses the susceptibility of large language models (LLMs) to framing effects, cognitive biases arising from variations in prompt wording, which can lead to unstable and unfair judgments when LLMs are deployed as evaluators. Introducing the psychological concept of framing into LLM evaluation for the first time, the work proposes a symmetric prompting design built on predicate-level positive–negative symmetry, combined with multi-model comparison and systematic bias analysis. Empirical validation across four high-stakes tasks shows that all 14 mainstream LLMs examined are significantly vulnerable to framing bias, with distinct model families displaying consistent directional tendencies. The findings identify framing bias as a structural flaw in current LLM-based evaluation systems, underscoring the urgent need for robust, bias-resistant evaluation protocols.
📝 Abstract
Large language models (LLMs) are known to produce varying responses depending on how a prompt is phrased, indicating that subtle cues in wording can steer their answers. However, the impact of this framing bias on LLM-based evaluation, where models are expected to deliver stable and impartial judgments, remains largely underexplored. Drawing inspiration from the framing effect in psychology, we systematically investigate how deliberate prompt framing skews model judgments across four high-stakes evaluation tasks. We design symmetric prompts using predicate-positive and predicate-negative constructions and show that such framing induces significant discrepancies in model outputs. Across 14 LLM judges, we observe clear susceptibility to framing, with model families exhibiting distinct tendencies toward agreement or rejection. These findings suggest that framing bias is a structural property of current LLM-based evaluation systems, underscoring the need for framing-aware evaluation protocols.
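The abstract does not spell out the prompt templates, but the core mechanism is easy to sketch. Below is a minimal, hypothetical Python illustration of predicate-level symmetric prompting: the same item is judged once under a positive framing (e.g. "is it correct?") and once under the negated predicate ("is it incorrect?"), and framing bias surfaces as logically inconsistent verdict pairs. All names here (`symmetric_prompts`, `framing_discrepancy`, the `Judge` callable) are illustrative assumptions, not the authors' actual implementation.

```python
from typing import Callable

# Hypothetical judge interface: takes a prompt, returns "yes" or "no".
# In practice this would wrap an LLM API call.
Judge = Callable[[str], str]


def symmetric_prompts(item: str, predicate: str, antonym: str) -> tuple[str, str]:
    """Build a predicate-positive / predicate-negative prompt pair.

    `predicate` and `antonym` are opposing framings of the same judgment,
    e.g. ("correct", "incorrect") or ("safe", "unsafe").
    """
    positive = f"Is the following answer {predicate}? Reply 'yes' or 'no'.\n\n{item}"
    negative = f"Is the following answer {antonym}? Reply 'yes' or 'no'.\n\n{item}"
    return positive, negative


def framing_discrepancy(judge: Judge, items: list[str],
                        predicate: str, antonym: str) -> float:
    """Fraction of items judged inconsistently across the two framings.

    Consistency requires complementary answers: "yes" to the positive
    framing should pair with "no" to the negative one, and vice versa.
    A perfectly framing-invariant judge scores 0.0.
    """
    inconsistent = 0
    for item in items:
        pos_prompt, neg_prompt = symmetric_prompts(item, predicate, antonym)
        pos_verdict = judge(pos_prompt).strip().lower()
        neg_verdict = judge(neg_prompt).strip().lower()
        if (pos_verdict == "yes") == (neg_verdict == "yes"):
            inconsistent += 1
    return inconsistent / len(items)


# Toy acquiescent judge that always agrees, regardless of framing; it is
# maximally inconsistent, mimicking the agreement bias described above.
always_yes: Judge = lambda prompt: "yes"
print(framing_discrepancy(always_yes, ["2 + 2 = 4"], "correct", "incorrect"))  # 1.0
```

Under this view, the family-level tendencies the paper reports (toward agreement or rejection) would show up as a systematic skew in which of the two framings elicits "yes", rather than as random noise across items.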