🤖 AI Summary
Embodied search under ambiguous natural-language instructions (e.g., “fetch a tool”) faces a fundamental trade-off between the physical cost of exploration and the cognitive cost of human interaction; existing MLLM-based agents treat ambiguity resolution as passive perception and fail to jointly optimize these heterogeneous costs. Method: We propose HC-GRPO, a reinforcement learning algorithm that, for the first time in MLLM policy optimization, explicitly models heterogeneous costs (e.g., navigation time, human attention demand) and unifies dialogue questioning, memory retrieval, and navigation actions in a single joint decision-making process. Leveraging a multimodal large language model and the AI2-THOR simulator, HC-GRPO minimizes total task cost via grouped trajectory sampling and relative advantage-based updates. Results: Experiments demonstrate significant improvements in task success rate and a ~50% reduction in total operational cost, validating the effectiveness of cost-aware training for grounding embodied agents in the physical world.
📝 Abstract
Multimodal Large Language Models (MLLMs) have empowered embodied agents with remarkable capabilities in planning and reasoning. However, when facing ambiguous natural language instructions (e.g., "fetch the tool" in a cluttered room), current agents often fail to balance the high cost of physical exploration against the cognitive cost of human interaction. They typically treat disambiguation as a passive perception problem, lacking the strategic reasoning needed to minimize total task execution cost. To bridge this gap, we propose ESearch-R1, a cost-aware embodied reasoning framework that unifies interactive dialogue (Ask), episodic memory retrieval (GetMemory), and physical navigation (Navigate) into a single decision process. We introduce HC-GRPO (Heterogeneous Cost-Aware Group Relative Policy Optimization). Unlike traditional PPO, which relies on a separate value critic, HC-GRPO optimizes the MLLM by sampling groups of reasoning trajectories and reinforcing those that achieve the best trade-off between information gain and heterogeneous costs (e.g., navigation time and human attention). Extensive experiments in AI2-THOR demonstrate that ESearch-R1 significantly outperforms standard ReAct-based agents: it improves task success rates while reducing total operational costs by approximately 50%, validating the effectiveness of GRPO in aligning MLLM agents with physical-world constraints.
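The core of the training signal described above can be sketched in a few lines: each sampled trajectory's reward combines task success with a weighted sum of heterogeneous costs, and advantages are computed relative to the group rather than via a learned value critic. This is a minimal illustrative sketch, not the paper's implementation; the function name, cost types, and trade-off weights are assumptions for illustration.

```python
import numpy as np

def hc_grpo_advantages(successes, costs, cost_weights):
    """Group-relative advantages for cost-penalized trajectory rewards.

    Hypothetical sketch of the HC-GRPO idea: reward = success indicator
    minus a weighted sum of heterogeneous costs (e.g., navigation time,
    number of questions asked), normalized within the sampled group
    (mean-subtracted, std-scaled), with no separate value critic.
    """
    costs = np.asarray(costs, dtype=float)        # shape (G, K): G trajectories, K cost types
    penalty = costs @ np.asarray(cost_weights, dtype=float)
    rewards = np.asarray(successes, dtype=float) - penalty
    # Group-relative normalization, as in GRPO-style updates.
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Example: 4 trajectories; cost columns = [navigation time (s), questions asked].
adv = hc_grpo_advantages(
    successes=[1, 1, 0, 1],
    costs=[[30, 1], [60, 0], [20, 3], [45, 2]],
    cost_weights=[0.01, 0.1],  # assumed trade-off weights, not from the paper
)
```

Trajectories that succeed while spending less on navigation and asking fewer questions receive the highest relative advantage, which is the trade-off the abstract describes.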