Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?

📅 2024-11-25
🏛️ arXiv.org
📈 Citations: 7
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) can perform latent multi-hop factual reasoning, without explicit chain-of-thought prompting, while avoiding reliance on statistical shortcuts (e.g., head and answer entities co-occurring in training data). Method: The authors introduce SOCRATES, a shortcut-free benchmark for latent reasoning, constructed via careful selection of relations and facts, co-occurrence filtering, and systematic removal of cases where models might guess answers or exploit partial matches. Contribution/Results: Latent composability proves highly sensitive to the semantic type of the intermediate answer: the best models reach 80% when the intermediate answer is a country but only 5% when it is a year. This capability emerges during pretraining yet remains substantially weaker than explicit chain-of-thought reasoning. The findings establish the existence, type-specificity, and limitations of latent multi-hop reasoning in LLMs, providing both an evaluation benchmark and an analytical framework for future research.

📝 Abstract
We evaluate how well Large Language Models (LLMs) latently recall and compose facts to answer multi-hop queries like "In the year Scarlett Johansson was born, the Summer Olympics were hosted in the country of". One major challenge in such evaluation is that LLMs may have developed shortcuts by encountering the head entity "Scarlett Johansson" and the answer entity "United States" in the same training sequences or merely guess the answer based on frequency-based priors. To prevent shortcuts, we exclude test queries where the head and answer entities might have co-appeared during training. Through careful selection of relations and facts and systematic removal of cases where models might guess answers or exploit partial matches, we construct an evaluation dataset SOCRATES (ShOrtCut-fRee lATent rEaSoning). We observe that LLMs demonstrate promising latent multi-hop reasoning abilities without exploiting shortcuts, but only for certain types of queries. For queries requiring latent recall of countries as the intermediate answer, the best models achieve 80% latent composability, but this drops to just 5% for the recall of years. Comparisons with Chain-of-Thought highlight a significant gap between the ability of models to reason latently versus explicitly. Analysis reveals that latent representations of the intermediate answer are constructed more often in queries with higher latent composability, and shows the emergence of latent multi-hop reasoning during pretraining.
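The shortcut-exclusion step described in the abstract (dropping any test query whose head and answer entities might have co-appeared in a training sequence) can be sketched as a simple filter. This is a minimal illustration of the idea, not the paper's actual pipeline; the function names, data layout, and toy corpus below are all hypothetical.

```python
# Hypothetical sketch of co-occurrence filtering: discard any two-hop query
# whose head entity and final answer entity appear together in a single
# training sequence, since the model could then answer via a shortcut.

def cooccur_in_corpus(head: str, answer: str, sequences) -> bool:
    """True if both entities appear in at least one training sequence."""
    return any(head in seq and answer in seq for seq in sequences)

def filter_shortcut_free(queries, sequences):
    """Keep only queries whose head/answer pair never co-occurs in training."""
    return [q for q in queries
            if not cooccur_in_corpus(q["head"], q["answer"], sequences)]

corpus = [
    "Scarlett Johansson was born in 1984.",
    "The 1984 Summer Olympics were hosted in the United States.",
    "Scarlett Johansson starred in a film shot in the United States.",  # shortcut
]
queries = [{"head": "Scarlett Johansson", "answer": "United States"}]

print(filter_shortcut_free(queries, corpus))  # query removed: []
```

In practice the paper also removes cases where models might guess answers from frequency priors or exploit partial matches; a real filter would operate over pretraining-scale corpora, not a substring check over a toy list.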
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs' latent multi-hop reasoning without shortcuts
Construct shortcut-free dataset SOCRATES for reliable testing
Analyze gaps between latent and explicit reasoning abilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs SOCRATES dataset to prevent shortcuts
Evaluates latent multi-hop reasoning in LLMs
Compares latent versus explicit reasoning abilities
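The headline numbers (80% for country-type intermediate answers versus 5% for years) are stated as "latent composability". One plausible reading of such a metric is: among two-hop queries where the model gets both single hops right, the fraction it also answers the composed query correctly without chain-of-thought. The sketch below illustrates that reading; the exact definition in the paper may differ, and all names here are illustrative.

```python
# Hypothetical composability metric: restrict to queries where both
# constituent single-hop facts are answered correctly, then measure how
# often the composed two-hop query is also answered correctly.

def latent_composability(results):
    """results: dicts with booleans hop1_correct, hop2_correct, composed_correct."""
    eligible = [r for r in results if r["hop1_correct"] and r["hop2_correct"]]
    if not eligible:
        return 0.0
    return sum(r["composed_correct"] for r in eligible) / len(eligible)

sample = [
    {"hop1_correct": True,  "hop2_correct": True,  "composed_correct": True},
    {"hop1_correct": True,  "hop2_correct": True,  "composed_correct": False},
    {"hop1_correct": True,  "hop2_correct": False, "composed_correct": False},  # excluded
]
print(latent_composability(sample))  # 0.5
```

Conditioning on correct single hops isolates composition from recall failures, which is what makes the country-versus-year gap interpretable as a reasoning difference rather than a knowledge difference.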