ESearch-R1: Learning Cost-Aware MLLM Agents for Interactive Embodied Search via Reinforcement Learning

📅 2025-12-20
🤖 AI Summary
Embodied search under ambiguous natural-language instructions (e.g., "fetch a tool") poses a fundamental trade-off between the physical cost of exploration and the cognitive cost of human interaction; existing MLLM-based agents treat ambiguity resolution as passive perception and fail to jointly optimize these heterogeneous costs. Method: We propose HC-GRPO, a reinforcement learning algorithm that, for the first time in MLLM policy optimization, explicitly models heterogeneous costs (e.g., navigation time, human attention demand) and unifies dialogue questioning, memory retrieval, and navigation actions in a single joint decision-making process. Built on a multimodal large language model and the AI2-THOR simulator, HC-GRPO minimizes total task cost via grouped trajectory sampling and relative advantage-based updates. Results: Experiments demonstrate significant improvements in task success rate and a roughly 50% reduction in total operational cost, validating the effectiveness of cost-aware training for grounding embodied agents in the physical world.

📝 Abstract
Multimodal Large Language Models (MLLMs) have empowered embodied agents with remarkable capabilities in planning and reasoning. However, when facing ambiguous natural language instructions (e.g., "fetch the tool" in a cluttered room), current agents often fail to balance the high cost of physical exploration against the cognitive cost of human interaction. They typically treat disambiguation as a passive perception problem, lacking the strategic reasoning to minimize total task execution cost. To bridge this gap, we propose ESearch-R1, a cost-aware embodied reasoning framework that unifies interactive dialogue (Ask), episodic memory retrieval (GetMemory), and physical navigation (Navigate) into a single decision process. We introduce HC-GRPO (Heterogeneous Cost-Aware Group Relative Policy Optimization). Unlike traditional PPO, which relies on a separate value critic, HC-GRPO optimizes the MLLM by sampling groups of reasoning trajectories and reinforcing those that achieve the optimal trade-off between information gain and heterogeneous costs (e.g., navigation time and human attention). Extensive experiments in AI2-THOR demonstrate that ESearch-R1 significantly outperforms standard ReAct-based agents. It improves task success rates while reducing total operational costs by approximately 50%, validating the effectiveness of GRPO in aligning MLLM agents with physical world constraints.
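The paper does not publish its objective, but the critic-free, group-relative update it describes can be sketched as follows: sample a group of trajectories for the same instruction, subtract weighted heterogeneous costs from each task reward, and normalize within the group. The cost weights and reward shaping below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def group_relative_advantages(task_rewards, nav_costs, ask_costs,
                              w_nav=0.1, w_ask=0.5, eps=1e-8):
    """GRPO-style advantages for one group of sampled trajectories.

    Each trajectory's return is its task reward minus weighted
    heterogeneous costs (navigation time, human-attention demand).
    Advantages are the group-normalized returns; no value critic
    is needed. Weights w_nav and w_ask are illustrative only.
    """
    returns = (np.asarray(task_rewards, dtype=float)
               - w_nav * np.asarray(nav_costs, dtype=float)
               - w_ask * np.asarray(ask_costs, dtype=float))
    return (returns - returns.mean()) / (returns.std() + eps)

# A group of 4 trajectories sampled for the same ambiguous instruction
adv = group_relative_advantages(
    task_rewards=[1.0, 1.0, 0.0, 1.0],  # task success indicator
    nav_costs=[20.0, 5.0, 30.0, 8.0],   # e.g., seconds of navigation
    ask_costs=[0.0, 2.0, 0.0, 1.0],     # e.g., questions asked
)
# Successful, low-cost trajectories receive positive advantage and are
# reinforced; costly failures receive negative advantage.
```

In an actual training loop these advantages would weight the policy-gradient (or clipped surrogate) loss over the MLLM's action tokens, as in standard GRPO.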
Problem

Research questions and friction points this paper is trying to address.

Balancing physical exploration costs with human interaction costs
Strategic reasoning to minimize total task execution costs
Optimizing trade-offs between information gain and heterogeneous costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies interactive dialogue, memory retrieval, and navigation into a single decision process
Uses HC-GRPO to optimize trade-off between information gain and costs
Reinforces reasoning trajectories that minimize heterogeneous operational expenses
Authors

Weijie Zhou
School of Traffic and Transportation, Beijing Jiaotong University, Beijing, China
Xuangtang Xiong
Tencent Robotics X & Futian Laboratory, Shenzhen, China
Ye Tian
Tencent Robotics X & Futian Laboratory, Shenzhen, China
Lijun Yue
Tencent Robotics X & Futian Laboratory, Shenzhen, China
Xinyu Wu
University of Chinese Academy of Sciences, Beijing, China
Wei Li
School of Traffic and Transportation, Beijing Jiaotong University, Beijing, China
Chaoyang Zhao
Institute of Automation, Chinese Academy of Sciences
Honghui Dong
School of Traffic and Transportation, Beijing Jiaotong University, Beijing, China
Ming Tang
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Jinqiao Wang
School of Traffic and Transportation, Beijing Jiaotong University, Beijing, China
Zhengyou Zhang
Tencent AI Lab & Tencent Robotics X