If Probable, Then Acceptable? Understanding Conditional Acceptability Judgments in Large Language Models

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) assess the acceptability of “if A, then B” conditionals, focusing on two cognitive dimensions: conditional probability and semantic relatedness. Methodologically, we conduct controlled experiments across diverse model architectures (decoder-only vs. encoder-decoder), scales (0.5B–70B parameters), and prompting strategies (zero-shot, few-shot, chain-of-thought), employing linear mixed-effects modeling and ANOVA to quantify response patterns. Results reveal that while LLMs exhibit rudimentary sensitivity to both probabilistic and semantic cues, their judgment consistency falls significantly short of human performance; notably, scaling model size does not substantially improve alignment with human judgments. This work uncovers a critical dissociation in current LLMs between superficial statistical pattern matching and deep semantic understanding in conditional reasoning. It establishes a novel evaluation paradigm and an empirical benchmark for assessing and advancing the logical reasoning capabilities of foundation models.

📝 Abstract
Conditional acceptability refers to how plausible a conditional statement is perceived to be. It plays an important role in communication and reasoning, as it influences how individuals interpret implications, assess arguments, and make decisions based on hypothetical scenarios. When humans evaluate how acceptable a conditional "If A, then B" is, their judgments are influenced by two main factors: the $\textit{conditional probability}$ of $B$ given $A$, and the $\textit{semantic relevance}$ of the antecedent $A$ given the consequent $B$ (i.e., whether $A$ meaningfully supports $B$). While prior work has examined how large language models (LLMs) draw inferences about conditional statements, it remains unclear how these models judge the $\textit{acceptability}$ of such statements. To address this gap, we present a comprehensive study of LLMs' conditional acceptability judgments across different model families, sizes, and prompting strategies. Using linear mixed-effects models and ANOVA tests, we find that models are sensitive to both conditional probability and semantic relevance, though to varying degrees depending on architecture and prompting style. A comparison with human data reveals that while LLMs incorporate probabilistic and semantic cues, they do so less consistently than humans. Notably, larger models do not necessarily align more closely with human judgments.
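The probabilistic cue in the abstract, the conditional probability of $B$ given $A$, can be illustrated with a toy computation. The sketch below is not from the paper: the counts and the "rain/wet street" conditional are hypothetical, used only to show how $P(B \mid A)$ would be estimated from co-occurrence frequencies.

```python
# Toy illustration of the probabilistic cue behind conditional acceptability:
# the conditional probability P(B | A), estimated from co-occurrence counts.
# All numbers below are hypothetical.

def conditional_probability(count_a_and_b: int, count_a: int) -> float:
    """Estimate P(B | A) = count(A and B) / count(A)."""
    if count_a == 0:
        raise ValueError("P(B | A) is undefined when count(A) is zero")
    return count_a_and_b / count_a

# "If it rains (A), then the street gets wet (B)":
# out of 40 observed rainy days, the street was wet on 38.
p_b_given_a = conditional_probability(38, 40)
print(round(p_b_given_a, 3))  # → 0.95
```

On the probabilistic account of conditionals, a high $P(B \mid A)$ like this one predicts a high acceptability rating; the paper's question is whether LLM judgments track this quantity as human judgments do.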
Problem

Research questions and friction points this paper is trying to address.

Understanding how LLMs judge conditional acceptability of if-then statements
Investigating LLM sensitivity to conditional probability and semantic relevance
Comparing LLM conditional judgments with human reasoning patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated LLM conditional acceptability judgments against conditional probability
Assessed semantic relevance impact via mixed-effects models
Compared model-human judgment alignment across architectures
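The ANOVA component of the analysis can be sketched as a one-way F test comparing mean acceptability ratings across conditions (e.g., prompting strategies). This is a minimal pure-Python sketch with hypothetical ratings, not the paper's actual analysis, which pairs ANOVA with linear mixed-effects models that also capture item- and model-level random effects.

```python
# Minimal one-way ANOVA F statistic, as might be used to compare mean
# acceptability ratings across prompting strategies. Ratings are hypothetical.

def one_way_anova_f(groups: list[list[float]]) -> float:
    """Return the F statistic: between-group over within-group mean square."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares (df = n - k)
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups for x in g
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical acceptability ratings (1-5 scale) per prompting strategy.
zero_shot = [4.0, 3.5, 4.2, 3.8]
few_shot = [4.5, 4.8, 4.4, 4.6]
chain_of_thought = [3.9, 4.1, 4.0, 3.7]

f_stat = one_way_anova_f([zero_shot, few_shot, chain_of_thought])
print(f_stat > 1.0)  # a large F suggests ratings differ across strategies
```

A large F relative to the F distribution's critical value would indicate that prompting strategy reliably shifts mean ratings, mirroring the kind of condition-level comparison the paper reports.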