🤖 AI Summary
Existing mathematical reasoning methods rely either on Chain-of-Thought (CoT) prompting for generalization or on Tool-Integrated Reasoning (TIR) for computational accuracy, yet neither lets large language models (LLMs) autonomously adapt their reasoning strategy to their own capabilities. Method: We propose TATA, the first framework enabling LLMs to dynamically select between CoT and TIR paths according to self-assessed competence. It introduces a capability-aware data filtering mechanism to construct personalized training sets during supervised fine-tuning, enabling zero-shot, rule-free path selection at inference time. Contribution/Results: Evaluated on six mathematical reasoning benchmarks, TATA significantly outperforms both standalone TIR and baseline CoT approaches, achieving a superior trade-off between accuracy and reasoning efficiency. These results empirically validate "capability-aligned" adaptive reasoning as an effective new paradigm.
📝 Abstract
Existing approaches to mathematical reasoning with large language models (LLMs) rely on Chain-of-Thought (CoT) for generalizability or Tool-Integrated Reasoning (TIR) for precise computation. While efforts have been made to combine these methods, they primarily rely on post-selection or predefined strategies, leaving an open question: whether LLMs can autonomously adapt their reasoning strategy based on their inherent capabilities. In this work, we propose TATA (Teaching LLMs According to Their Aptitude), an adaptive framework that enables LLMs to personalize their reasoning strategy spontaneously, aligning it with their intrinsic aptitude. TATA incorporates base-LLM-aware data selection during supervised fine-tuning (SFT) to tailor training data to the model's unique abilities. This equips LLMs to autonomously determine and apply the appropriate reasoning strategy at test time. We evaluate TATA through extensive experiments on six mathematical reasoning benchmarks, using both general-purpose and math-specialized LLMs. Empirical results demonstrate that TATA effectively combines the complementary strengths of CoT and TIR, achieving superior or comparable performance with improved inference efficiency compared to TIR alone. Further analysis underscores the critical role of aptitude-aware data selection in enabling LLMs to make effective, adaptive reasoning decisions aligned with their capabilities.
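The base-LLM-aware data selection described above can be sketched as follows. This is a minimal, illustrative reconstruction, not the paper's implementation: it assumes each training question has been pre-scored with the base model's success rate under CoT and under TIR (a hypothetical `Question` record), and keeps the path the model handles better when building the SFT set.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    cot_score: float  # assumed: base LLM success rate answering via CoT
    tir_score: float  # assumed: base LLM success rate answering via TIR

def select_training_path(q: Question, margin: float = 0.0) -> str:
    """Pick the reasoning style the base model is better suited to.

    `margin` is a hypothetical threshold: TIR must beat CoT by at least
    this much, since TIR incurs extra cost (code execution) at inference.
    """
    if q.tir_score > q.cot_score + margin:
        return "TIR"
    return "CoT"  # prefer CoT on ties: no interpreter call needed

def build_sft_set(questions):
    """Label each question with the path used to build its SFT target."""
    return [(q.text, select_training_path(q)) for q in questions]

# Toy usage: a symbolic question the model reasons through well in text,
# and a computation-heavy one where tool use helps.
examples = [
    Question("easy algebra word problem", cot_score=0.9, tir_score=0.6),
    Question("large-number arithmetic", cot_score=0.3, tir_score=0.8),
]
print(build_sft_set(examples))
```

Training on data labeled this way is what, per the abstract, lets the fine-tuned model choose CoT or TIR on its own at test time, with no rules or post-selection needed.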