🤖 AI Summary
This work addresses the performance degradation of large language models on long-context tasks, which stems from uncontrolled memory expansion and the absence of a principled mechanism for terminating reasoning. To overcome these limitations, the authors propose GRU-Mem, a novel framework that integrates a text-controlled dual-gate mechanism—comprising an update gate and an exit gate—with end-to-end reinforcement learning. This design enables selective memory updating and timely termination of inference during sequential chunk-wise processing. The approach is guided by two tailored reward signals: an update reward and an exit reward. Evaluated across diverse long-context reasoning benchmarks, GRU-Mem significantly outperforms existing baselines, achieving up to a 4× speedup in inference while maintaining or even improving accuracy.
📝 Abstract
While reasoning over long context is crucial for various real-world applications, it remains challenging for large language models (LLMs), which suffer from performance degradation as the context length grows. The recent work MemAgent tackles this by processing the context chunk by chunk in an RNN-like loop and updating a textual memory used for final answering. However, this naive recurrent memory update has two crucial drawbacks: (i) the memory can quickly explode because it is updated indiscriminately, even on evidence-free chunks; and (ii) the loop lacks an exit mechanism, leading to unnecessary computation even after sufficient evidence has been collected. To address these issues, we propose GRU-Mem, which incorporates two text-controlled gates for more stable and efficient long-context reasoning. Specifically, in GRU-Mem, the memory is updated only when the update gate is open, and the recurrent loop exits immediately once the exit gate is open. To endow the model with these capabilities, we introduce two reward signals, $r^{\text{update}}$ and $r^{\text{exit}}$, within end-to-end RL, rewarding correct updating and exiting behaviors respectively. Experiments on various long-context reasoning tasks demonstrate the effectiveness and efficiency of GRU-Mem, which generally outperforms the vanilla MemAgent with up to a 4$\times$ inference speedup.
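The dual-gate recurrence described above can be sketched as a simple chunk-wise loop. The sketch below is illustrative only, assuming the LLM emits its gate decisions alongside a rewritten memory; the stub `model` function, the keyword heuristics, and all names are hypothetical stand-ins for the actual RL-trained model and its gate tokens.

```python
# Hypothetical sketch of GRU-Mem's gated recurrent loop.
# A stub `model` stands in for the LLM call; in the real system, the model's
# text output would encode the new memory plus update/exit gate decisions.

def model(memory, chunk):
    """Stub LLM: returns (proposed_memory, update_gate, exit_gate).
    Gates here fire on toy keyword heuristics for illustration only."""
    has_evidence = "evidence" in chunk          # would be judged by the LLM
    proposed = memory + [chunk] if has_evidence else memory
    sufficient = len(proposed) >= 2             # toy "enough evidence" signal
    return proposed, has_evidence, sufficient

def gru_mem(chunks):
    memory = []
    for chunk in chunks:
        proposed, update_gate, exit_gate = model(memory, chunk)
        if update_gate:   # update gate open: commit the memory rewrite
            memory = proposed
        if exit_gate:     # exit gate open: stop reading remaining chunks
            break
    return memory

chunks = ["filler text", "evidence A", "more filler", "evidence B", "never read"]
print(gru_mem(chunks))
```

Because the exit gate opens after the second evidence chunk, the final chunk is never processed, which is the source of the inference speedup; the update gate keeps evidence-free chunks out of the memory, preventing uncontrolled growth.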