🤖 AI Summary
Existing LLM-based agents often suffer from premature commitment to seemingly plausible yet suboptimal actions, primarily due to the absence of active evaluation and comparative reasoning over candidate actions. Method: We propose SAND, the first framework enabling agents to autonomously generate step-level “action deliberation” trajectories in large action spaces. SAND employs self-consistent sampling to generate diverse candidate actions and integrates an execution-feedback-driven critique mechanism for iterative refinement and selection. Crucially, all deliberation traces are generated end-to-end by the base LLM itself—requiring no external annotations or auxiliary supervision signals. Contribution/Results: Evaluated on two representative interactive task domains, SAND achieves average performance gains exceeding 20%, significantly outperforming both supervised fine-tuning and state-of-the-art agent optimization methods. It establishes a new paradigm for robust, self-reflective decision-making in LLM agents.
📝 Abstract
Large Language Model (LLM) agents are commonly tuned with supervised finetuning on ReAct-style expert trajectories or preference optimization over pairwise rollouts. Most of these methods focus on imitating specific expert behaviors or promoting chosen reasoning thoughts and actions over rejected ones. However, without reasoning over and comparing alternative actions, LLM agents finetuned with these methods may over-commit to seemingly plausible but suboptimal actions due to limited exploration of the action space. To address this, we propose the Self-taught ActioN Deliberation (SAND) framework, which enables LLM agents to explicitly deliberate over candidate actions before committing to one. To tackle the challenges of when and what to deliberate given a large action space and step-level action evaluation, we incorporate self-consistency action sampling and execution-guided action critique to synthesize step-wise action deliberation thoughts using the base model of the LLM agent. The deliberation trajectories are then iteratively used to finetune the LLM agent itself. Evaluated on two representative interactive agent tasks, SAND achieves an average 20% improvement over initial supervised finetuning and also outperforms state-of-the-art agent tuning approaches.
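The deliberation loop the abstract describes (sample candidate actions, critique them with execution feedback, commit to one, and keep the trace for finetuning) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: `propose_action`, `execute`, and `deliberate` are hypothetical stand-ins for the LLM sampler, the environment feedback, and the SAND deliberation step.

```python
from collections import Counter

def propose_action(state, seed):
    # Hypothetical stand-in for sampling one candidate action
    # from the base LLM (self-consistency uses several samples).
    actions = ["search[red shoes]", "click[item_3]", "search[red shoes]"]
    return actions[seed % len(actions)]

def execute(state, action):
    # Hypothetical stand-in for execution-guided critique:
    # a scalar feedback signal from actually running the action.
    return 1.0 if action.startswith("search") else 0.2

def deliberate(state, num_samples=5):
    """Sample candidates (self-consistency), critique each with
    execution feedback, and commit to the best-scoring action."""
    candidates = [propose_action(state, s) for s in range(num_samples)]
    # Self-consistency: candidates sampled more often get a small bonus.
    freq = Counter(candidates)
    scored = {a: execute(state, a) + 0.1 * freq[a] for a in set(candidates)}
    best = max(scored, key=scored.get)
    # The full trace (candidates, critiques, choice) is what would be
    # kept as a deliberation trajectory to finetune the agent itself.
    return {"candidates": sorted(set(candidates)),
            "scores": scored,
            "chosen": best}

trace = deliberate("find red shoes under $50")
print(trace["chosen"])  # → search[red shoes]
```

The key design point mirrored here is that deliberation is explicit and step-level: the agent compares alternatives before acting, rather than committing to the first plausible action.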