🤖 AI Summary
Large language models (LLMs) struggle to quantitatively assess the coherence of reasoning chains in multi-step logical inference. To address this, we propose a lightweight, attention-based consistency quantification method grounded in the Transformer architecture: for the first time, we directly leverage query-key (QK) alignment scores from specific attention heads to measure the logical plausibility of transitions between reasoning steps. The method requires no fine-tuning, external annotations, or post-hoc processing, and yields interpretable consistency scores from a single forward pass. Through systematic attention analysis and head-selection strategies, we validate our method on multiple benchmarks, including LogiQA and ReClor, across model scales from 1.5B to 70B parameters. Results demonstrate substantial improvements in robustness against distractors and in discriminative capability for deep logical reasoning, outperforming conventional ablation-based evaluation approaches.
📝 Abstract
Large language models (LLMs) have demonstrated impressive performance on various natural language processing tasks, yet their ability to perform multi-step logical reasoning remains an open challenge. Although Chain-of-Thought prompting has improved logical reasoning by enabling models to generate intermediate steps, it lacks mechanisms to assess the coherence of these logical transitions. In this paper, we propose a novel, lightweight evaluation strategy for logical reasoning that uses query-key alignments inside transformer attention heads. By performing a single forward pass and extracting a "QK-score" from carefully chosen heads, our method reveals latent representations that reliably separate valid from invalid inferences, offering a scalable alternative to traditional ablation-based techniques. We also provide an empirical validation on multiple logical reasoning benchmarks, demonstrating improved robustness of our evaluation method against distractors and at increased reasoning depth. The experiments were conducted on a diverse set of models ranging from 1.5B to 70B parameters.
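To make the core idea concrete, here is a minimal sketch of what a "QK-score" could look like: the pre-softmax, scaled dot product between the query vector of a candidate reasoning step and the key vector of the premise it should follow from, taken at a single chosen attention head. All names and the toy vectors below are illustrative assumptions, not the paper's actual head-selection procedure or model internals.

```python
import numpy as np

def qk_score(q_step: np.ndarray, k_premise: np.ndarray) -> float:
    """Pre-softmax attention logit between one query and one key vector.

    In the sketched setup, q_step and k_premise would be the per-head
    query/key projections read out during a single forward pass at a
    carefully chosen layer and head (hypothetical interface).
    """
    d_head = q_step.shape[-1]
    return float(q_step @ k_premise / np.sqrt(d_head))

# Toy stand-ins for per-head projections (head dimension 64).
d = 64
premise_key = np.zeros(d); premise_key[0] = 1.0     # key of a premise token
valid_query = np.zeros(d); valid_query[0] = 1.0     # step aligned with the premise
invalid_query = np.zeros(d); invalid_query[1] = 1.0 # unrelated step

s_valid = qk_score(valid_query, premise_key)        # 1/sqrt(64) = 0.125
s_invalid = qk_score(invalid_query, premise_key)    # 0.0
```

In this toy example the aligned (valid) step scores strictly higher than the unrelated one, which is the separation property the abstract describes; in practice the vectors would come from the model's own QK projections rather than being hand-constructed.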