🤖 AI Summary
Large language models often fail in multi-hop question answering due to positional bias, which hinders effective integration of dispersed evidence. This work proposes the Multi-Focus Attention Instruction (MFAI) probe to decouple evidence identification from synthesis, enabling systematic diagnosis of reasoning weaknesses. The study reveals a "weakest-link law": multi-hop performance is constrained by the least visible piece of evidence, with accuracy governed predominantly by absolute position rather than inter-evidence distance (variance <3%). It also uncovers the dual nature of attention guidance: matched guidance helps, while misleading guidance can harm. Combining semantic probing, attention manipulation, and a System-2 reasoning architecture, experiments on MuSiQue and NeoQA show accuracy gains of up to 11.5% at low-visibility positions. Notably, reasoning-capable models achieve performance comparable to gold-only baselines.
📝 Abstract
Despite scaling to massive context windows, Large Language Models (LLMs) struggle with multi-hop reasoning due to inherent position bias, which causes them to overlook information at certain positions. Whether these failures stem from an inability to locate evidence (recognition failure) or to integrate it (synthesis failure) remains unclear. We introduce Multi-Focus Attention Instruction (MFAI), a semantic probe that disentangles these mechanisms by explicitly steering attention toward selected positions. Across 5 LLMs on two multi-hop QA tasks (MuSiQue and NeoQA), we establish the "Weakest Link Law": multi-hop reasoning performance collapses to the performance level of the least visible evidence. Crucially, this failure is governed by absolute position rather than the linear distance between facts (performance variance $<3\%$). We further identify a duality in attention steering: while matched MFAI resolves recognition bottlenecks, improving accuracy by up to 11.5% at low-visibility positions, misleading MFAI triggers confusion in real-world tasks but is successfully filtered in synthetic tasks. Finally, we demonstrate that "thinking" models, which utilize System-2 reasoning, effectively locate and integrate the required information, matching gold-only baselines even in noisy, long-context settings.
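The "Weakest Link Law" can be summarized informally as a min-aggregation over per-evidence visibility; the notation below is illustrative (not taken from the paper), with $\mathrm{Acc}_{\text{single}}(p_k)$ denoting single-fact recall accuracy at the absolute position $p_k$ of the $k$-th evidence piece:

```latex
% Illustrative sketch of the Weakest Link Law (notation is ours):
% multi-hop accuracy is bounded by the hardest-to-see evidence position.
\mathrm{Acc}_{\text{multi}} \;\approx\; \min_{k \in \{1,\dots,K\}} \mathrm{Acc}_{\text{single}}(p_k)
```

On this reading, moving any single piece of evidence to a low-visibility position drags overall performance down to that position's level, regardless of where the other facts sit.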