Improving Chain-of-Thought for Logical Reasoning via Attention-Aware Intervention

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes an end-to-end, non-interactive attention-aware intervention (AAI) method to enhance logical reasoning in large language models without relying on external symbolic solvers or interactive frameworks. The approach identifies and leverages a previously unobserved alignment between specific attention heads and logical operators, enabling the injection of structural information into few-shot prompts. During inference, AAI reweights these operator-aligned attention heads to guide the model toward more effective use of prior knowledge—without any additional training. Evaluated across multiple logical reasoning benchmarks and diverse model architectures, the method achieves significant performance gains with negligible computational overhead while simultaneously improving the interpretability of the reasoning process.

📝 Abstract
Modern logical reasoning with LLMs primarily relies on complex interactive frameworks that decompose the reasoning process into subtasks solved through carefully designed prompts, or on external resources (e.g., symbolic solvers) that exploit strong logical structures. Such interactive approaches introduce additional overhead or depend on external components, which limits their scalability. In this work, we introduce a non-interactive, end-to-end framework for reasoning tasks, enabling reasoning to emerge within the model itself, improving generalization while preserving analyzability without any external resources. We show that introducing structural information into the few-shot prompt activates a subset of attention heads whose patterns align with logical reasoning operators. Building on this insight, we propose Attention-Aware Intervention (AAI), an inference-time intervention method that reweights attention scores across selected heads identified by their logical patterns. AAI offers an efficient way to steer the model's reasoning toward leveraging prior knowledge through attention modulation. Extensive experiments show that AAI enhances logical reasoning performance across diverse benchmarks and model architectures, while incurring negligible additional computational overhead. Code is available at https://github.com/phuongnm94/aai_for_logical_reasoning.
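The core mechanism described in the abstract, reweighting the attention scores of selected operator-aligned heads at inference time, can be sketched in miniature. This is a hedged illustration, not the authors' implementation: the head selection set, the scaling factor `alpha`, and the choice to intervene on pre-softmax scores are all assumptions for exposition.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(scores_per_head, selected_heads, alpha=1.5):
    """Toy attention-aware intervention.

    scores_per_head: per-head lists of pre-softmax attention scores
                     for one query position.
    selected_heads:  indices of heads assumed to align with logical
                     operators (an assumption here, not the paper's
                     actual selection procedure).
    alpha:           illustrative reweighting factor (> 1 sharpens
                     the selected heads' attention distributions).
    """
    out = []
    for h, scores in enumerate(scores_per_head):
        if h in selected_heads:
            # Scale pre-softmax scores of the chosen head, then
            # let softmax renormalize; unselected heads are untouched.
            scores = [alpha * s for s in scores]
        out.append(softmax(scores))
    return out
```

In this sketch, scaling a head's scores before softmax concentrates its attention on the tokens it already favored; the training-free nature of the method corresponds to the fact that only inference-time scores are modified, never model weights.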
Problem

Research questions and friction points this paper is trying to address.

logical reasoning
large language models
non-interactive framework
scalability
end-to-end reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-Aware Intervention
Chain-of-Thought
Logical Reasoning
Attention Heads
End-to-End Reasoning
Phuong Minh Nguyen
RebelsNLU Lab, IS school, Japan Advanced Institute of Science and Technology
Dang Huu-Tien
RebelsNLU Lab, IS school, Japan Advanced Institute of Science and Technology
Naoya Inoue
Japan Advanced Institute of Science and Technology (JAIST) / RIKEN AIP
Interpretability/Explainable AI
Commonsense Reasoning
Reading Comprehension
Argumentation