Question Tokens Deserve More Attention: Enhancing Large Language Models without Training through Step-by-Step Reading and Question Attention Recalibration

📅 2025-04-13
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) struggle with modeling long-range dependencies and attending to critical tokens during complex problem understanding and multi-step reasoning. Method: This paper proposes a training-free, dual-path approach comprising prompt engineering and inference-time attention recalibration. We introduce Step-by-Step Reading (SSR/SSR+/SSR++), a novel prompting strategy that decomposes comprehension into sequential reading steps; we further uncover, for the first time, the impact of repeated question tokens and backward dependencies on understanding performance. Additionally, we propose dynamic question-aware attention reweighting, a parameter-free technique that recalibrates attention distributions during inference. Results: SSR++ achieves state-of-the-art accuracy on GSM8K (96.66%), ASDiv (94.61%), and AQuA (76.28%). Attention recalibration boosts LLaMA-3.1-8B's AQuA performance by +5.17%, significantly enhancing critical information capture and long-range reasoning capability.

๐Ÿ“ Abstract
Large Language Models (LLMs) often struggle with tasks that require a deep understanding of complex questions, especially when faced with long-range dependencies or multi-step reasoning. This work investigates the limitations of current LLMs in question comprehension and identifies three insights: (1) repeating question tokens improves comprehension by increasing attention to question regions, (2) increased backward dependencies negatively affect performance due to unidirectional attentional constraints, and (3) recalibrating attentional mechanisms to prioritize question-relevant regions improves performance. Based on these findings, we first propose a family of prompt-based strategies - Step-by-Step Reading (SSR), SSR+, and SSR++ - that guide LLMs to incrementally process question tokens and align their reasoning with the input structure. These methods significantly improve performance, with SSR++ achieving state-of-the-art results on several benchmarks: 96.66% on GSM8K, 94.61% on ASDiv, and 76.28% on AQuA. Second, we introduce a training-free attention recalibration mechanism that dynamically adjusts attention distributions during inference to emphasize question-relevant regions. This method improves the accuracy of LLaMA 3.1-8B on AQuA by 5.17% without changing model parameters or input prompts. Taken together, our results highlight the importance of structured prompt design and attention optimization in improving LLM comprehension, providing lightweight yet effective tools for improving performance in various NLP tasks.
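The abstract describes SSR as guiding the model to process question tokens incrementally and to repeat question tokens before reasoning. A minimal sketch of what such a prompt template might look like is below; the exact SSR/SSR+/SSR++ wording is not reproduced in this summary, so the template text is an assumption for illustration only.

```python
def ssr_prompt(question: str) -> str:
    """Wrap a question in step-by-step reading instructions.

    Illustrative only: the paper's actual SSR/SSR+/SSR++ templates
    are not given in this summary, so this wording is a guess at the
    general shape (incremental reading + question repetition).
    """
    return (
        "Read the question one sentence at a time, restating each "
        "sentence before moving on.\n"
        f"Question: {question}\n"
        "After restating every sentence, repeat the full question "
        "once more, then solve it step by step."
    )

prompt = ssr_prompt(
    "A train travels 60 miles in 1.5 hours. What is its average speed?"
)
```

The key idea the abstract attributes to SSR is that repeating question tokens increases the attention mass the model places on the question region, which the prompt above encourages purely through instructions, without any model changes.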
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs' complex question comprehension without training
Improving attention to question tokens for better reasoning
Dynamic attention recalibration for question-relevant regions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Step-by-Step Reading prompts enhance LLM comprehension
Attention recalibration prioritizes question-relevant regions
Training-free dynamic adjustment during inference
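The training-free recalibration described above can be sketched as adding a bonus to pre-softmax attention scores at question-token positions and renormalizing. This is a minimal illustration under assumed details (an additive constant boost, a binary question mask); the paper's actual reweighting rule may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def recalibrate_attention(scores, question_mask, boost=1.0):
    """Boost pre-softmax attention scores at question-token positions.

    scores:        (query_len, key_len) raw attention logits
    question_mask: (key_len,) 1.0 where a key position is a question
                   token, 0.0 elsewhere
    boost:         additive bonus (an assumed form; the paper's exact
                   dynamic reweighting rule is not specified here)
    """
    adjusted = scores + boost * question_mask[None, :]
    return softmax(adjusted, axis=-1)

# Toy example: one query over 5 keys; positions 1-3 are question tokens.
scores = np.zeros((1, 5))
mask = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
base = softmax(scores)                          # uniform: 0.2 each
recal = recalibrate_attention(scores, mask)     # mass shifts to question
```

With uniform logits, the question region holds 0.6 of the attention mass before recalibration and roughly 0.8 after a boost of 1.0, illustrating how the adjustment shifts attention toward question-relevant keys without touching model weights.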