🤖 AI Summary
Large language model (LLM) agents often exhibit overconfidence or intervene inefficiently because they lack metacognitive capabilities such as self-regulation, awareness of their own limitations, and knowing when to ask for help (e.g., by querying stronger models or external tools).
Method: We propose an offline intervention-decision framework tailored to a constrained intervention budget $C$, integrating an LLM-based process reward model (PRM) with tabular reinforcement learning. By annotating agent trajectories offline and performing policy distillation, we train a lightweight "assistant" policy that determines precisely when to invoke more capable models or additional computational resources.
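The pipeline above can be sketched in a toy form: a stand-in for the PRM labels logged (state, action) pairs, and a tabular value update distills those labels into a lightweight helper. All names here (`prm_score`, the "easy"/"hard" state space) are illustrative assumptions, not the paper's actual components.

```python
def prm_score(state, intervened):
    """Stand-in for an LLM-based process reward model: scores a
    (state, action) pair. Here, intervening on 'hard' states is
    rewarded and intervening on 'easy' states is penalized."""
    if intervened:
        return 1.0 if state == "hard" else -0.5
    return 0.5 if state == "easy" else -1.0

def train_helper(trajectories, alpha=0.5, epochs=20):
    """Tabular value learning over offline (state, action) pairs
    labeled by the PRM; no live environment interaction needed."""
    q = {}
    for _ in range(epochs):
        for state, intervened in trajectories:
            key = (state, intervened)
            r = prm_score(state, intervened)
            q[key] = q.get(key, 0.0) + alpha * (r - q.get(key, 0.0))
    return q

def helper_policy(q, state, budget):
    """Request an intervention only if it scores better than acting
    alone and the budget C is not yet exhausted."""
    if budget <= 0:
        return False
    return q.get((state, True), 0.0) > q.get((state, False), 0.0)

# Offline dataset: logged agent states with both actions observed.
data = [(s, a) for s in ["easy", "hard"] for a in [True, False]] * 10
q = train_helper(data)
```

Because the labels come from offline trajectories, the costly PRM and stronger-model calls happen once at annotation time rather than on every training step.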
Contribution/Results: Our approach significantly reduces costly interventions during training while achieving near-optimal intervention performance across multiple tasks. It balances robustness, training efficiency, and decision optimality, outperforming baselines in both intervention accuracy and resource utilization under budget constraints.
📝 Abstract
Human intelligence involves metacognitive abilities like self-regulation, recognizing limitations, and seeking assistance only when needed. While LLM agents excel in many domains, they often lack this awareness. Overconfident agents risk catastrophic failures, while those that seek help excessively hinder efficiency. A key challenge is enabling an agent with a limited intervention budget $C$ to decide when to request assistance. In this paper, we propose an offline framework that trains a "helper" policy to request interventions, such as more powerful models or test-time compute, by combining LLM-based process reward models (PRMs) with tabular reinforcement learning. Using state transitions collected offline, we score optimal intervention timing with PRMs and train the helper model on these labeled trajectories. This offline approach significantly reduces costly intervention calls during training. Furthermore, the integration of PRMs with tabular RL enhances robustness to off-policy data while avoiding the inefficiencies of deep RL. We empirically find that our method delivers optimal helper behavior.
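At deployment, the trained helper must respect the hard budget $C$. A minimal sketch of that decision loop, with a hypothetical `helper_decides` predicate standing in for the trained policy:

```python
def run_episode(states, helper_decides, C):
    """Walk through the agent's states, requesting at most C
    interventions; once the budget is spent, the agent acts alone."""
    budget = C
    calls = []
    for s in states:
        if budget > 0 and helper_decides(s):
            calls.append(s)  # invoke the stronger model / extra compute
            budget -= 1
    return calls

# Toy helper that asks for help on states it judges hard.
decides = lambda s: s == "hard"
episode = ["easy", "hard", "easy", "hard", "hard"]
requested = run_episode(episode, decides, C=2)  # budget caps calls at 2
```

Capping the count of calls, rather than penalizing them in the reward, makes the budget constraint exact at test time.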