🤖 AI Summary
This work investigates whether test-time chain-of-thought (CoT) expansion improves the factual accuracy of large language models (LLMs) in non-mathematical open-domain question answering. We propose a three-stage method: (1) distilling reasoning trajectories from state-of-the-art models; (2) augmenting reasoning with knowledge graph (KG) paths to strengthen factual grounding; and (3) combining multi-scale instruction tuning (based on the Qwen2.5 series) with expanded test-time compute and token budgets. Our systematic evaluation—comprising 168 experimental configurations and analyzing 1.7 million reasoning traces—demonstrates, for the first time, that test-time CoT expansion consistently boosts factual accuracy by 2–8% in non-mathematical open-domain QA, with greater gains observed in smaller models. We publicly release all code, data, and reasoning trajectories. Core contributions include: (i) establishing the efficacy of reasoning expansion beyond mathematical domains, and (ii) introducing KG-path injection as an interpretable, factually constrained augmentation paradigm.
📝 Abstract
Recent studies of large language model (LLM) reasoning have demonstrated promising performance gains from lengthy thinking processes and additional computation at inference time, primarily on mathematical reasoning tasks (Muennighoff et al., 2025). However, it remains unclear whether longer reasoning chains inherently improve factual accuracy, particularly beyond mathematical contexts. In this work, we thoroughly examine LLM reasoning in complex open-domain question-answering (QA) scenarios. We first distill reasoning traces from advanced, large-scale reasoning models (QwQ-32B and DeepSeek-R1-671B), then fine-tune a range of Qwen2.5-based models, from smaller instruction-tuned variants to larger architectures. To enrich the reasoning traces, we inject factual information from knowledge graphs, in the form of paths, into them. Our experimental setup covers four baseline approaches and six instruction-tuned models evaluated on a benchmark of six datasets encompassing over 22.6K questions. In total, we carry out 168 experimental runs and analyze approximately 1.7 million reasoning traces. Our findings indicate that, within a single run, smaller reasoning models achieve noticeable improvements in factual accuracy over their original instruction-tuned counterparts. Moreover, our analysis shows that with additional test-time compute and larger token budgets, factual accuracy consistently improves by 2–8%, further confirming the effectiveness of test-time scaling for open-domain QA. We release all experimental artifacts for further research.
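To make the KG-path injection step concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: the path linearization format, function names, and prompt layout are all assumptions for exposition.

```python
# Hypothetical sketch of KG-path injection into a reasoning trace.
# The linearization format and prompt layout are illustrative assumptions;
# the paper's exact serialization may differ.

def linearize_path(path):
    """Turn a KG path [(head, relation, tail), ...] into a readable string."""
    parts = [path[0][0]]  # start from the first head entity
    for _head, rel, tail in path:
        parts.append(f"--{rel}-->")
        parts.append(tail)
    return " ".join(parts)

def inject_kg_paths(question, reasoning_trace, kg_paths):
    """Prepend linearized KG paths to a reasoning trace as factual grounding."""
    facts = "\n".join(f"- {linearize_path(p)}" for p in kg_paths)
    return (
        f"Question: {question}\n"
        f"Relevant knowledge-graph facts:\n{facts}\n"
        f"Reasoning: {reasoning_trace}"
    )

# Toy example
path = [("Marie Curie", "born_in", "Warsaw"), ("Warsaw", "capital_of", "Poland")]
print(linearize_path(path))
# Marie Curie --born_in--> Warsaw --capital_of--> Poland
```

The appeal of this scheme is that each injected fact is an explicit, human-readable graph path, so the factual grounding of the trace stays interpretable and auditable.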