Abstract Activation Spaces for Content-Invariant Reasoning in Large Language Models

πŸ“… 2026-02-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models are prone to semantic-plausibility interference in syllogistic reasoning, often conflating formal validity with content credibility and thereby exhibiting systematic biases. To address this, the paper proposes an activation-level abstraction-guidance mechanism: it constructs paired abstract and concrete syllogisms to define an abstract reasoning space, and introduces a lightweight Abstractor module that intervenes in the forward pass at multiple residual-stream layers to decouple structural inference from lexical semantics. The approach substantially suppresses semantic interference, reduces content-induced errors in cross-lingual evaluations, and improves the model's sensitivity to formal validity and its overall reasoning robustness.

πŸ“ Abstract
Large Language Models (LLMs) often struggle with deductive judgment in syllogistic reasoning, systematically conflating semantic plausibility with formal validity, a phenomenon known as the content effect. This bias persists even when models generate step-wise explanations, indicating that intermediate rationales may inherit the same semantic shortcuts that affect answers. Recent approaches propose mitigating this issue by increasing inference-time structural constraints, either by encouraging abstract intermediate representations or by intervening directly in the model's internal computations; however, reliably suppressing semantic interference remains an open challenge. To make formal deduction less sensitive to semantic content, we introduce a framework for abstraction-guided reasoning that explicitly separates structural inference from lexical semantics. We construct paired content-laden and abstract syllogisms and use the model's activations on abstract inputs to define an abstract reasoning space. We then learn lightweight Abstractors that, from content-conditioned residual-stream states, predict representations aligned with this space and integrate these predictions via multi-layer interventions during the forward pass. Using cross-lingual transfer as a test bed, we show that abstraction-aligned steering reduces content-driven errors and improves validity-sensitive performance. Our results position activation-level abstraction as a scalable mechanism for enhancing the robustness of formal reasoning in LLMs against semantic interference.
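The abstract describes the core mechanism at a high level: an Abstractor maps a content-conditioned residual-stream state toward the activation space induced by abstract (content-free) syllogisms, and the prediction is blended back into the forward pass. The sketch below illustrates that idea only; the paper's actual Abstractor architecture, training objective, layer choices, and blending rule are not specified here, so the linear least-squares map, the synthetic paired activations, and the mixing coefficient `alpha` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hypothetical residual-stream width

# Stand-ins for paired activations: the same model run on content-laden
# syllogisms (H_content) and their abstracted counterparts (H_abstract).
H_content = rng.normal(size=(100, d))
H_abstract = H_content @ (0.1 * rng.normal(size=(d, d))) + 0.01 * rng.normal(size=(100, d))

# A toy "Abstractor": a linear map fit by least squares to predict
# abstract-space states from content-conditioned ones. The paper learns a
# lightweight module; this closed-form fit is only an illustration.
W, *_ = np.linalg.lstsq(H_content, H_abstract, rcond=None)

def intervene(h, alpha=0.5):
    """Blend a layer's state toward its predicted abstract representation.

    alpha=0 leaves the forward pass untouched; alpha=1 replaces the state
    with the Abstractor's prediction. Applied at several layers, this is
    one plausible reading of "multi-layer intervention".
    """
    return (1.0 - alpha) * h + alpha * (h @ W)

h = rng.normal(size=(d,))
steered = intervene(h)
```

In a real setting the blending would be applied with forward hooks at the chosen residual-stream layers during generation, with `alpha` (or a learned gate) controlling how strongly decoding is steered into the abstract reasoning space.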
Problem

Research questions and friction points this paper is trying to address.

content effect
syllogistic reasoning
semantic interference
formal validity
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

activation abstraction
content-invariant reasoning
syllogistic reasoning
semantic interference
multi-layer intervention
πŸ‘₯ Authors
Gabriele Maraia (Human Centric ART, University of Rome Tor Vergata)
Marco Valentino (University of Sheffield)
F. Zanzotto (Human Centric ART, University of Rome Tor Vergata; Almawave S.p.A.)
Leonardo Ranaldi (University of Edinburgh)