How Language Models Conflate Logical Validity with Plausibility: A Representational Analysis of Content Effects

📅 2025-10-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit the “content effect”—a well-documented bias where logical validity is conflated with semantic plausibility, undermining formal reasoning. Method: Through representation analysis, linear probing, and causal intervention experiments, we identify a strong linear alignment between logical validity and semantic plausibility in the hidden-layer representation space—the representational origin of the content effect. We then propose interpretable steering vectors to explicitly disentangle abstract logical structure from surface-level semantics via targeted representation editing. Contribution/Results: Our method significantly mitigates the content effect, improving accuracy across multiple logic reasoning benchmarks by an average of +8.2%. Behavioral ablation studies confirm its causal efficacy. This work establishes a novel paradigm for understanding and enhancing LLMs’ formal reasoning capabilities, offering both theoretical insight into representational geometry and a practical, controllable intervention mechanism grounded in geometric interpretability.

Technology Category

Application Category

📝 Abstract
Both humans and large language models (LLMs) exhibit content effects: biases in which the plausibility of the semantic content of a reasoning problem influences judgments regarding its logical validity. While this phenomenon in humans is best explained by the dual-process theory of reasoning, the mechanisms behind content effects in LLMs remain unclear. In this work, we address this issue by investigating how LLMs encode the concepts of validity and plausibility within their internal representations. We show that both concepts are linearly represented and strongly aligned in representational geometry, leading models to conflate plausibility with validity. Using steering vectors, we demonstrate that plausibility vectors can causally bias validity judgements, and vice versa, and that the degree of alignment between these two concepts predicts the magnitude of behavioral content effects across models. Finally, we construct debiasing vectors that disentangle these concepts, reducing content effects and improving reasoning accuracy. Our findings advance understanding of how abstract logical concepts are represented in LLMs and highlight representational interventions as a path toward more logical systems.
Problem

Research questions and friction points this paper is trying to address.

Analyzing how language models confuse logical validity with plausibility
Investigating content effects in LLM reasoning through representational geometry
Developing debiasing methods to separate validity from plausibility in models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing linear representations of validity and plausibility
Using steering vectors to causally manipulate judgments
Constructing debiasing vectors to disentangle concepts
🔎 Similar Papers
No similar papers found.
L
Leonardo Bertolazzi
University of Trento
Sandro Pezzelle
Sandro Pezzelle
Assistant Professor at ILLC, University of Amsterdam
Natural Language ProcessingMultimodal Machine LearningAICognitive science
R
Raffaelle Bernardi
Free University of Bozen-Bolzano