T$^2$: An Adaptive Test-Time Scaling Strategy for Contextual Question Answering

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) in conversational question answering (CQA) suffer from poor adaptability due to fixed inference depth, while existing test-time adaptation methods introduce human-induced bias. Method: We propose T², a dynamic inference-depth control framework centered on the novel “Think-to-Think” mechanism. It automatically assesses question complexity via structural parsing, augments prompts with semantically similar examples, and selects and transfers inference strategies through multi-criteria evaluation—namely consistency, conciseness, and verifiability—ensuring unbiased, adaptive reasoning. Contribution/Results: T² eliminates reliance on manual heuristics and premature termination constraints. Evaluated on seven CQA benchmarks, it achieves average accuracy gains over strong baselines while reducing computational overhead by up to 25.2%, demonstrating superior trade-offs between performance and inference efficiency.

📝 Abstract
Recent advances in Large Language Models (LLMs) have demonstrated remarkable performance in Contextual Question Answering (CQA). However, prior approaches typically employ elaborate reasoning strategies regardless of question complexity, leading to low adaptability. Recent efficient test-time scaling methods introduce budget constraints or early-stop mechanisms to avoid overthinking on straightforward questions, but they add human bias to the reasoning process and fail to leverage the model's inherent reasoning capabilities. To address these limitations, we present T$^2$: Think-to-Think, a novel framework that dynamically adapts reasoning depth based on question complexity. T$^2$ leverages the insight that if an LLM can effectively solve similar questions using a specific reasoning strategy, it can apply the same strategy to the original question. This insight enables the adoption of concise reasoning for straightforward questions while maintaining detailed analysis for complex problems. T$^2$ works through four key steps: decomposing questions into structural elements, generating similar examples with candidate reasoning strategies, evaluating these strategies against multiple criteria, and applying the most appropriate strategy to the original question. Experimental evaluation across seven diverse CQA benchmarks demonstrates that T$^2$ not only achieves higher accuracy than baseline methods but also reduces computational overhead by up to 25.2%.
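The four-step pipeline in the abstract can be sketched as a minimal toy, assuming stub helpers in place of real LLM calls. All function names, the candidate strategies, and the token-count scoring heuristic below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of the T^2 (Think-to-Think) pipeline. Helper names,
# the two candidate strategies, and the scoring heuristic are assumptions
# standing in for real LLM calls and the paper's multi-criteria evaluation.

def decompose(question: str) -> dict:
    """Step 1: parse the question into simple structural elements."""
    tokens = question.split()
    return {"tokens": tokens, "length": len(tokens)}

def generate_similar_examples(structure: dict) -> list[dict]:
    """Step 2: pair synthetic similar questions with candidate strategies."""
    strategies = ["direct-answer", "chain-of-thought"]
    return [{"strategy": s, "example": f"similar question solved via {s}"}
            for s in strategies]

def score_strategy(candidate: dict, structure: dict) -> float:
    """Step 3: multi-criteria evaluation (consistency, conciseness,
    verifiability in the paper); here a toy stand-in that favours concise
    reasoning for short questions and detailed reasoning for long ones."""
    concise = candidate["strategy"] == "direct-answer"
    if structure["length"] <= 8:
        return 1.0 if concise else 0.5
    return 0.5 if concise else 1.0

def t2_select_strategy(question: str) -> str:
    """Step 4: apply the best-scoring strategy to the original question."""
    structure = decompose(question)
    candidates = generate_similar_examples(structure)
    best = max(candidates, key=lambda c: score_strategy(c, structure))
    return best["strategy"]

print(t2_select_strategy("What year was it founded?"))
print(t2_select_strategy(
    "Given the three documents, which author's claim contradicts "
    "the timeline established in the second one?"))
```

The point of the sketch is the control flow, not the heuristic: in the actual framework, strategy selection is driven by how well each candidate strategy solves the generated similar examples, replacing the fixed-depth reasoning of prior approaches.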
Problem

Research questions and friction points this paper is trying to address.

Adapts reasoning depth based on question complexity
Avoids human bias in reasoning strategies
Reduces computational overhead while improving accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic reasoning depth adaptation
Similar example strategy evaluation
Significantly reduces computational overhead
Zhengyi Zhao
The Chinese University of Hong Kong
Natural Language Processing · Machine Learning · Information Extraction
Shubo Zhang
University of International Relations
Zezhong Wang
Institute of Science Tokyo
VLSI physical design
Huimin Wang
Jarvis Research Center, Tencent YouTu Lab
Yutian Zhao
Jarvis Research Center, Tencent YouTu Lab
Bin Liang
The Chinese University of Hong Kong
Yefeng Zheng
Professor, Westlake University, Hangzhou, China, IEEE Fellow, AIMBE Fellow
AI in Health · Medical Imaging · Computer Vision · Natural Language Processing · Large Language Models
Binyang Li
University of International Relations
Kam-Fai Wong
The Chinese University of Hong Kong
Xian Wu
Jarvis Research Center, Tencent YouTu Lab