CaRT: Teaching LLM Agents to Know When They Know Enough

πŸ“… 2025-10-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This paper addresses the challenge that large language model (LLM) agents struggle to autonomously determine when to stop in multi-turn information-gathering tasks, which often leads to over-reasoning or goal drift. The authors propose CaRT (Counterfactuals and Reasoning for Termination), a framework that constructs minimally divergent counterfactual trajectory pairs and fine-tunes the model to verbally explain its termination decision, explicitly teaching LLMs the decision criteria for when to stop gathering information. Unlike opaque, black-box termination signals, CaRT models termination as an interpretable counterfactual reasoning process. Evaluated on interactive medical diagnosis and mathematical problem-solving tasks, CaRT achieves significant improvements: +12.3% task success rate and an average reduction of 2.8 interaction turns, outperforming existing termination mechanisms based on reward modeling or plain supervised fine-tuning. To the authors' knowledge, CaRT is the first approach to language-guided termination learning that ensures both interpretability and cross-task generalizability.
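The data-construction idea in the summary can be sketched in code: take a trajectory where terminating is correct, remove one decisive turn to produce a minimally divergent counterpart where continuing is correct, and format both as fine-tuning examples whose target is a verbal rationale plus a stop/continue decision. All names (`Trajectory`, `make_counterfactual_pair`, the STOP/CONTINUE format) are illustrative assumptions, not the paper's actual implementation:

```python
# Illustrative sketch of CaRT-style counterfactual pair construction.
# Names and data formats are hypothetical, not from the paper's code.

from dataclasses import dataclass


@dataclass
class Trajectory:
    """A multi-turn information-gathering history with a termination label."""
    turns: list          # alternating agent questions and environment answers
    should_stop: bool    # whether terminating here is the correct decision
    rationale: str       # verbal explanation of that decision


def make_counterfactual_pair(complete: Trajectory, drop_turn: int):
    """Build a minimally divergent pair: the full trajectory (terminate)
    and the same trajectory with one key turn removed (continue)."""
    incomplete = Trajectory(
        turns=[t for i, t in enumerate(complete.turns) if i != drop_turn],
        should_stop=False,
        rationale="A key piece of information is still missing; keep asking.",
    )
    return complete, incomplete


def to_sft_example(traj: Trajectory):
    """Format a trajectory as a fine-tuning example whose target is the
    rationale followed by an explicit STOP/CONTINUE decision."""
    prompt = "\n".join(traj.turns) + "\nShould you stop gathering information?"
    decision = "STOP" if traj.should_stop else "CONTINUE"
    target = traj.rationale + "\nDecision: " + decision
    return {"prompt": prompt, "target": target}


full = Trajectory(
    turns=["Q: Any fever?", "A: Yes, 39C", "Q: Cough?", "A: Dry cough"],
    should_stop=True,
    rationale="Fever plus dry cough is sufficient to commit to a diagnosis.",
)
stop_ex, go_ex = (to_sft_example(t) for t in make_counterfactual_pair(full, drop_turn=3))
```

Because the two trajectories differ by only one turn, the fine-tuned model is pushed to attribute the flipped decision to that specific missing piece of information rather than to surface features of the dialogue.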

πŸ“ Abstract
Many tasks require learned models to strategically gather relevant information over multiple rounds of interaction before actually acting on a task. Strategic information gathering requires models to know not only how to effectively acquire information, but also when to stop gathering information and make a decision, in order to avoid overthinking or getting derailed when acting. In this paper, we formalize this problem and introduce Counterfactuals and Reasoning for Termination (CaRT), an approach for teaching LLMs when to stop seeking information. To appropriately learn when to terminate, CaRT fine-tunes LLMs using counterfactual pairs of trajectories, one where termination is appropriate and a minimally modified version of the same trajectory where it is not. It trains the LLM to explain the rationale for the termination decision in either case via verbal reasoning, and imbues this capability into the base LLM via fine-tuning. We instantiate CaRT in two domains: interactive medical diagnosis and math problem solving. In both domains, we find that CaRT improves the efficiency of information gathering and task success rate compared to other fine-tuning methods.
Problem

Research questions and friction points this paper is trying to address.

Teaching LLM agents when to stop information gathering
Avoiding overthinking by strategic termination of information seeking
Improving task efficiency through counterfactual reasoning training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trains LLMs with counterfactual trajectory pairs
Teaches termination via verbal reasoning explanations
Fine-tunes models for strategic information gathering