🤖 AI Summary
This work addresses the challenges posed by the exponential growth of the scientific literature and the inefficiency of existing paper-reading agents built on large language models, which often suffer from redundant exploration and poor planning. To this end, we propose PaperCompass, a framework that decouples high-level planning from fine-grained execution: it first drafts an abstract action sequence and then progressively instantiates the parameters of each function call. Inspired by cognitive science, we further introduce Draft-and-Follow Policy Optimization (DFPO), a lightweight hierarchical reinforcement learning method that jointly optimizes the draft plan and the final solution, helping smaller models bridge the “knowing-doing” gap. Evaluated on Paper-QA benchmarks, our approach markedly improves reasoning efficiency, matching the performance of substantially larger models while maintaining stable and reliable training dynamics.
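To make the plan/execute decoupling concrete, here is a minimal Python sketch of a draft-then-instantiate agent loop. Everything in it (the tool names, the stub planner and executor functions, and the `run_agent` helper) is a hypothetical illustration of the idea, not PaperCompass's actual interface.

```python
from typing import Callable

def draft_plan(question: str) -> list[str]:
    # Stage 1: draft an abstract action sequence with no concrete
    # arguments yet. A real system would prompt the planner LLM here.
    return ["search", "read", "extract"]

def instantiate_step(action: str, question: str, evidence: list[str]) -> dict:
    # Stage 2: progressively fill in the parameters of one function call,
    # conditioned on the evidence gathered so far.
    return {"query": question, "evidence": list(evidence)}

def run_agent(question: str, tools: dict[str, Callable[..., str]]) -> str:
    plan = draft_plan(question)          # high-level plan, produced once
    evidence: list[str] = []
    for action in plan:                  # fine-grained execution, step by step
        params = instantiate_step(action, question, evidence)
        evidence.append(tools[action](**params))
    return evidence[-1]                  # last tool output is the answer

# Toy tools so the sketch runs end to end.
tools = {
    "search":  lambda query, evidence: f"sections matching '{query}'",
    "read":    lambda query, evidence: f"text of [{evidence[-1]}]",
    "extract": lambda query, evidence: f"answer distilled from: {evidence[-1]}",
}
print(run_agent("Which benchmark does the paper evaluate on?", tools))
```

The key design point the sketch captures is that the expensive planning step happens once, while each execution step only has to fill in parameters, which is what curbs the redundant exploration the summary describes.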
📝 Abstract
The accelerating growth of the scientific literature makes it increasingly difficult for researchers to track new advances through manual reading alone. Recent progress in large language models (LLMs) has therefore spurred interest in autonomous agents that can read scientific papers and extract task-relevant information. However, most existing approaches rely either on heavily engineered prompting or on a conventional SFT-RL training pipeline, both of which often lead to excessive and low-yield exploration. Drawing inspiration from cognitive science, we propose PaperCompass, a framework that mitigates these issues by separating high-level planning from fine-grained execution. PaperCompass first drafts an explicit plan that outlines the intended sequence of actions, and then performs detailed reasoning to instantiate each step by selecting the parameters for the corresponding function calls. To train such behavior, we introduce Draft-and-Follow Policy Optimization (DFPO), a tailored RL method that jointly optimizes both the draft plan and the final solution. DFPO can be viewed as a lightweight form of hierarchical reinforcement learning, aimed at narrowing the “knowing-doing” gap in LLMs. We provide a theoretical analysis that establishes DFPO's favorable optimization properties, supporting a stable and reliable training process. Experiments on paper-based question answering (Paper-QA) benchmarks show that PaperCompass improves efficiency over strong baselines without sacrificing performance, achieving results comparable to much larger models.
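As a rough illustration of how a joint draft-and-solution objective of this kind might look, the sketch below applies a single REINFORCE-style advantage to both the draft tokens and the solution tokens. The weighting `alpha`, the shared mean baseline, and the name `dfpo_loss` are assumptions made for exposition; the paper's exact objective may differ.

```python
import torch

def dfpo_loss(logp_draft: torch.Tensor,     # (B, T_d) draft-token log-probs
              logp_solution: torch.Tensor,  # (B, T_s) solution-token log-probs
              reward: torch.Tensor,         # (B,) terminal task reward
              alpha: float = 0.5) -> torch.Tensor:
    # One shared advantage credits both levels, so no per-level critic is
    # needed; this is what would make the hierarchy "lightweight".
    advantage = reward - reward.mean()
    draft_term = (logp_draft.sum(dim=1) * advantage).mean()
    solution_term = (logp_solution.sum(dim=1) * advantage).mean()
    return -(alpha * draft_term + (1.0 - alpha) * solution_term)

# Dummy tensors just to show the call convention; real log-probs would come
# from the policy LLM's forward pass over the draft and solution segments.
B, T_d, T_s = 4, 8, 32
logp_d = torch.randn(B, T_d, requires_grad=True)
logp_s = torch.randn(B, T_s, requires_grad=True)
loss = dfpo_loss(logp_d, logp_s, torch.rand(B))
loss.backward()  # gradients would flow into the shared LLM weights
```

Because both terms share one reward signal and one baseline, the draft is rewarded only insofar as the executed solution succeeds, which is one plausible way to narrow the "knowing-doing" gap the abstract refers to.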