🤖 AI Summary
This work addresses the sharp drop in robustness that existing mathematical reasoning models exhibit when confronted with semantically relevant but computationally irrelevant distractors, a problem exacerbated in low-resource language settings. The authors reframe mathematical reasoning as the generation of executable computation graphs and introduce a distractor-aware structured intermediate representation that explicitly models distractor nodes. Building on the Gemma-3 architecture and combining supervised fine-tuning with Group Relative Policy Optimization, the model achieves strong robustness against distractors without relying on distractor-augmented training data. On the DISTRACTMATH-BN benchmark, it attains weighted accuracy comparable to specialized reasoning models while reducing inference token consumption by 89%, substantially improving both efficiency and robustness.
📝 Abstract
Chain-of-Thought (CoT) prompting is widely adopted for mathematical problem solving, including in low-resource languages, yet its behavior under irrelevant context remains underexplored. To study this challenge systematically, we introduce DISTRACTMATH-BN, a Bangla benchmark that augments MGSM and MSVAMP with semantically coherent but computationally irrelevant information. Evaluating seven models ranging from 3B to 12B parameters, we observe substantial performance degradation under distractors: standard models drop by up to 41 points, while reasoning-specialized models decline by 14 to 20 points despite consuming five times more tokens. We propose †DAGGER, which reformulates mathematical problem solving as executable computational graph generation with explicit modeling of distractor nodes. Fine-tuning Gemma-3 models with supervised fine-tuning followed by Group Relative Policy Optimization achieves weighted accuracy comparable to reasoning-specialized models on the augmented benchmarks while using 89% fewer tokens. Importantly, this robustness emerges without explicit training on distractor-augmented examples. Our results suggest that enforcing structured intermediate representations improves robustness and inference efficiency in mathematical reasoning compared to free-form approaches, particularly in noisy, low-resource settings.