🤖 AI Summary
Large language models are prone to semantic plausibility interference in syllogistic reasoning, often conflating formal validity with content credibility and thereby exhibiting systematic biases. To address this, this work proposes an activation-level abstraction guidance mechanism: by constructing paired abstract-concrete syllogisms, it defines an abstract reasoning space and introduces a lightweight Abstractor module that intervenes in the forward pass across multiple residual-stream layers to decouple structural inference from lexical semantics. This approach significantly suppresses semantic interference, reduces content-induced errors in cross-lingual evaluations, and enhances the model's sensitivity to formal validity and overall reasoning robustness.
📄 Abstract
Large Language Models (LLMs) often struggle with deductive judgment in syllogistic reasoning, systematically conflating semantic plausibility with formal validity, a phenomenon known as the content effect. This bias persists even when models generate step-wise explanations, indicating that intermediate rationales may inherit the same semantic shortcuts that affect final answers. Recent approaches propose mitigating this issue by increasing inference-time structural constraints, either by encouraging abstract intermediate representations or by intervening directly in the model's internal computations; however, reliably suppressing semantic interference remains an open challenge. To make formal deduction less sensitive to semantic content, we introduce a framework for abstraction-guided reasoning that explicitly separates structural inference from lexical semantics. We construct paired content-laden and abstract syllogisms and use the model's activations on abstract inputs to define an abstract reasoning space. We then learn lightweight Abstractors that, from content-conditioned residual-stream states, predict representations aligned with this space, and we integrate these predictions via multi-layer interventions during the forward pass. Using cross-lingual transfer as a test bed, we show that abstraction-aligned steering reduces content-driven errors and improves validity-sensitive performance. Our results position activation-level abstraction as a scalable mechanism for making formal reasoning in LLMs more robust to semantic interference.
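The core mechanism can be sketched in a few lines of NumPy. This is an illustrative toy only: the Abstractor here is a single linear map fit by least squares on paired activations, and the names (`fit_abstractor`, `steer`) and the interpolation coefficient `alpha` are assumptions for exposition, not the paper's actual architecture or training objective.

```python
import numpy as np

def fit_abstractor(H_content, H_abstract):
    """Fit a linear Abstractor W that maps content-conditioned residual
    states to their abstract-space counterparts (least-squares toy;
    the real module could be a small MLP trained per layer)."""
    W, *_ = np.linalg.lstsq(H_content, H_abstract, rcond=None)
    return W

def steer(h, W, alpha=0.5):
    """Intervention at one layer: nudge a residual-stream state h
    toward the Abstractor's predicted abstract representation."""
    return (1 - alpha) * h + alpha * (h @ W)

# Toy data: activations on abstract syllogisms define the abstract
# space; content-laden inputs yield semantically perturbed states.
rng = np.random.default_rng(0)
H_abs = rng.normal(size=(64, 16))                      # abstract space
H_con = H_abs + rng.normal(scale=0.3, size=(64, 16))   # content-laden
W = fit_abstractor(H_con, H_abs)

# Steered states sit closer (in aggregate) to the abstract targets
# than the raw content-conditioned states do.
err_raw = np.linalg.norm(H_con - H_abs)
err_steered = np.linalg.norm(steer(H_con, W, alpha=1.0) - H_abs)
print(err_steered <= err_raw)
```

In the full method this steering would be applied at several layers during the forward pass (e.g. via forward hooks in a transformer), rather than to a single batch of cached activations as in this sketch.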