🤖 AI Summary
This study examines the robustness of large language models (LLMs) under non-ideal inference conditions, specifically summary inference, fine-grained noise suppression, and contextual filtering. These scenarios are largely overlooked by existing benchmarks, which assume clean inputs and ignore how reinforcement learning fine-tuning (RLFT) performance degrades under realistic conditions.
Method: We propose a conceptual framework of “reasoning robustness under non-ideal conditions,” grounded in findings from brain science, and design an evaluation paradigm around it. Using a representative policy-gradient algorithm, we conduct RLFT on three LLMs and a large vision-language model and evaluate them on eight public benchmarks.
Results: Empirical results show that while RLFT improves performance under ideal conditions, it fails to generalize to realistic settings involving noise, redundancy, or information loss, revealing intrinsic robustness deficiencies. These findings provide theoretical grounding and empirical evidence for rethinking how robust reasoning is modeled and for revising evaluation standards.
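The paper does not include code, and the specific RLFT setup is not detailed here. As a hedged illustration of the policy-gradient update that such fine-tuning builds on, the sketch below runs REINFORCE with a moving-average baseline on a toy two-action bandit; the task, names, and hyperparameters are our own stand-ins, not the authors' method.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce(n_steps=2000, lr=0.1, seed=0):
    """Minimal REINFORCE on a two-action bandit (toy stand-in for
    reward-based fine-tuning of a generative policy)."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]             # policy logits
    true_rewards = [0.2, 0.8]      # action 1 is better in expectation
    baseline = 0.0                 # moving-average reward baseline
    for _ in range(n_steps):
        p = softmax(theta)
        a = 0 if rng.random() < p[0] else 1
        r = true_rewards[a] + 0.1 * rng.gauss(0, 1)  # noisy reward
        # grad of log pi(a) for a softmax policy: one_hot(a) - p
        for i in range(2):
            grad = (1.0 if i == a else 0.0) - p[i]
            theta[i] += lr * (r - baseline) * grad   # gradient ascent
        baseline = 0.99 * baseline + 0.01 * r
    return softmax(theta)

probs = reinforce()
```

After training, the policy shifts probability mass toward the higher-reward action; the paper's point is that this reward-driven sharpening does not, by itself, confer robustness to noisy or degraded inputs.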
📝 Abstract
Reinforcement learning (RL) has become a key technique for enhancing the reasoning abilities of large language models (LLMs), with policy-gradient algorithms dominating the post-training stage because of their efficiency and effectiveness. However, most existing benchmarks evaluate LLM reasoning under idealized settings, overlooking performance in realistic, non-ideal scenarios. We identify three representative non-ideal scenarios with practical relevance: summary inference, fine-grained noise suppression, and contextual filtering. We introduce a new research direction guided by the brain-science finding that human reasoning remains reliable under imperfect inputs. We formally define and evaluate these challenging scenarios. We fine-tune three LLMs and a state-of-the-art large vision-language model (LVLM) using RL with a representative policy-gradient algorithm and then test their performance on eight public datasets. Our results reveal that while RL fine-tuning improves baseline reasoning under idealized settings, performance declines significantly across all three non-ideal scenarios, exposing critical limitations in advanced reasoning capabilities. Although we propose a scenario-specific remediation method, our results suggest that current methods leave these reasoning deficits largely unresolved. This work highlights that the reasoning abilities of large models are often overstated and underscores the importance of evaluating models under non-ideal scenarios. The code and data will be released at XXXX.