🤖 AI Summary
This work addresses the challenge of multi-hop reasoning over extremely long documents, where critical evidence is sparsely scattered and difficult to retain under a bounded memory budget. To this end, the authors propose InfMem, a control-centric agent that employs a PreThink-Retrieve-Write protocol to actively assess evidence sufficiency, perform targeted retrieval, and update its bounded memory via an evidence-aware joint compression strategy. The key innovation lies in integrating a System-2-like active memory control mechanism with an SFT-to-RL training pipeline, aligning retrieval, memory writing, and early-stopping decisions with the end-task objective. Evaluated on question-answering benchmarks spanning 32k to 1M tokens, InfMem substantially outperforms MemAgent, achieving average accuracy gains of 8.23–11.84 percentage points across multiple Qwen models and accelerating inference by 3.9× on average.
📝 Abstract
Reasoning over ultra-long documents requires synthesizing sparse evidence scattered across distant segments under strict memory constraints. While streaming agents enable scalable processing, their passive memory update strategy often fails to preserve low-salience bridging evidence required for multi-hop reasoning. We propose InfMem, a control-centric agent that instantiates System-2-style control via a PreThink-Retrieve-Write protocol. InfMem actively monitors evidence sufficiency, performs targeted in-document retrieval, and applies evidence-aware joint compression to update a bounded memory. To ensure reliable control, we introduce a practical SFT-to-RL training recipe that aligns retrieval, writing, and stopping decisions with end-task correctness. On ultra-long QA benchmarks from 32k to 1M tokens, InfMem consistently outperforms MemAgent across backbones. Specifically, InfMem improves average absolute accuracy by +10.17, +11.84, and +8.23 points on Qwen3-1.7B, Qwen3-4B, and Qwen2.5-7B, respectively, while reducing inference time by $3.9\times$ on average (up to $5.1\times$) via adaptive early stopping.
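The PreThink-Retrieve-Write control loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the functions `prethink`, `retrieve`, and the compression inside `BoundedMemory.write` are hypothetical stand-ins (simple keyword heuristics in place of the learned sufficiency check, targeted in-document retrieval, and evidence-aware joint compression).

```python
# Hypothetical sketch of a PreThink-Retrieve-Write loop with adaptive
# early stopping over a chunked document stream. All names and heuristics
# here are illustrative stand-ins, not InfMem's actual components.
from dataclasses import dataclass, field


@dataclass
class BoundedMemory:
    capacity: int                          # max number of evidence notes retained
    notes: list[str] = field(default_factory=list)

    def write(self, new_notes: list[str]) -> None:
        # Stand-in for evidence-aware joint compression: merge old and new
        # notes, then keep only the most recent ones within the budget.
        merged = self.notes + new_notes
        self.notes = merged[-self.capacity:]


def prethink(memory: BoundedMemory, question: str) -> bool:
    """Stand-in sufficiency check: stop once any note mentions the query cue."""
    return any(question.lower() in n.lower() for n in memory.notes)


def retrieve(chunk: str, question: str) -> list[str]:
    """Stand-in targeted retrieval: keep sentences sharing words with the question."""
    q_words = set(question.lower().split())
    return [s for s in chunk.split(". ") if q_words & set(s.lower().split())]


def answer_over_stream(chunks: list[str], question: str, capacity: int = 4):
    memory = BoundedMemory(capacity)
    for i, chunk in enumerate(chunks):
        if prethink(memory, question):           # PreThink: is evidence sufficient?
            return memory.notes, i               # adaptive early stop
        memory.write(retrieve(chunk, question))  # Retrieve + Write under budget
    return memory.notes, len(chunks)
```

The early-stop check is what yields the reported inference speedup: once the memory is judged sufficient, remaining chunks are never processed.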