🤖 AI Summary
Existing research lacks a systematic definition of efficiency for LLM-based agents, leading to high interaction costs in complex reasoning—such as redundant tokens and excessive action steps caused by lengthy chains of thought. This work introduces a dual-dimensional formalization of agent efficiency: step-level efficiency (minimizing token usage per response) and trajectory-level efficiency (reducing the total number of action steps per task). Building on this definition, the authors propose DEPO, a preference optimization framework that jointly rewards succinct responses and fewer action steps. Evaluated on the WebShop and BabyAI benchmarks, the method reduces token consumption by up to 60.9% and action steps by up to 26.9%, while improving task performance by up to 29.3%. The approach also generalizes to out-of-domain math benchmarks and retains its efficiency gains when trained on only a quarter of the data.
📝 Abstract
Recent advances in large language models (LLMs) have greatly improved their reasoning and decision-making abilities when deployed as agents. Richer reasoning, however, often comes at the cost of longer chains of thought (CoT), hampering interaction efficiency in real-world scenarios. Moreover, a systematic definition of LLM agent efficiency is still lacking, hindering targeted improvements. To this end, we introduce dual-efficiency, comprising (i) step-level efficiency, which minimizes tokens per step, and (ii) trajectory-level efficiency, which minimizes the number of steps needed to complete a task. Building on this definition, we propose DEPO, a dual-efficiency preference optimization method that jointly rewards succinct responses and fewer action steps. Experiments on WebShop and BabyAI show that DEPO cuts token usage by up to 60.9% and steps by up to 26.9%, while achieving up to a 29.3% improvement in performance. DEPO also generalizes to three out-of-domain math benchmarks and retains its efficiency gains when trained on only 25% of the data. Our project page is at https://opencausalab.github.io/DEPO.
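To make the dual-efficiency definition concrete, the ordering it implies over agent trajectories can be sketched as follows. This is an illustrative sketch only, not the paper's actual DEPO training objective: the `Trajectory` class, the whitespace token proxy, and the `prefer` function are all hypothetical names and simplifications, assuming that preference pairs rank successful trajectories first, then fewer action steps (trajectory-level efficiency), then fewer total tokens (step-level efficiency).

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Trajectory:
    """A hypothetical container for one agent rollout."""
    success: bool
    steps: List[str]  # one LLM response per action step

    @property
    def num_steps(self) -> int:
        # trajectory-level cost: how many action steps were taken
        return len(self.steps)

    @property
    def total_tokens(self) -> int:
        # step-level cost, using a crude whitespace token proxy
        return sum(len(s.split()) for s in self.steps)


def prefer(a: Trajectory, b: Trajectory) -> Tuple[Trajectory, Trajectory]:
    """Return (chosen, rejected) under a dual-efficiency ordering:
    task success first, then fewer steps, then fewer total tokens."""
    def key(t: Trajectory):
        return (not t.success, t.num_steps, t.total_tokens)
    return (a, b) if key(a) <= key(b) else (b, a)
```

Under this ordering, two successful trajectories for the same task form a preference pair in which the shorter, more concise one is chosen, which is the kind of contrastive signal a preference optimization method could then train on.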