From Assumptions to Actions: Turning LLM Reasoning into Uncertainty-Aware Planning for Embodied Agents

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high communication overhead and interference inherent in existing approaches to multi-agent coordination in partially observable, decentralized environments, where frequent message exchange is typically required. To mitigate this limitation, the authors propose the Planner-Composer-Evaluator (PCE) framework, which explicitly models the implicit assumptions underlying large language model (LLM) reasoning as structured decision trees. By incorporating multi-dimensional scoring—assessing likelihood, reward, and cost—PCE enables uncertainty-aware, rational planning while substantially reducing communication demands. Experimental results demonstrate that PCE outperforms communication-intensive baselines on the C-WAH and TDW-MAT benchmarks, achieving significant gains in both task success rate and efficiency. These improvements are orthogonal to model scale and reasoning depth, and user studies further confirm that PCE yields more efficient and trustworthy human-agent interactions.

📝 Abstract
Embodied agents operating in multi-agent, partially observable, and decentralized environments must plan and act despite pervasive uncertainty about hidden objects and collaborators' intentions. Recent advances in applying Large Language Models (LLMs) to embodied agents have addressed many long-standing challenges, such as high-level goal decomposition and online adaptation. Yet uncertainty is still primarily mitigated through frequent inter-agent communication, which incurs substantial token and time costs and can disrupt established workflows when human partners are involved. We introduce PCE, a Planner-Composer-Evaluator framework that converts the fragmented assumptions latent in LLM reasoning traces into a structured decision tree. Internal nodes encode environment assumptions and leaves map to actions; each path is then scored by scenario likelihood, goal-directed gain, and execution cost to guide rational action selection without heavy communication. Across two challenging multi-agent benchmarks (C-WAH and TDW-MAT) and three diverse LLM backbones, PCE consistently outperforms communication-centric baselines in success rate and task efficiency while showing comparable token usage. Ablation results indicate that the performance gains obtained by scaling model capacity or reasoning depth persist even when PCE is applied, while PCE consistently raises the baseline across both capacity and reasoning-depth scales, confirming that structured uncertainty handling complements both forms of scaling. A user study further demonstrates that PCE produces communication patterns that human partners perceive as more efficient and trustworthy. Together, these results establish a principled route for turning latent LLM assumptions into reliable strategies for uncertainty-aware planning.
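The abstract describes a decision tree whose internal nodes carry environment assumptions and whose leaves carry actions, with each root-to-leaf path scored by scenario likelihood, goal-directed gain, and execution cost. The paper's exact scoring rule is not given here; the sketch below assumes a simple expected-utility combination (path likelihood × gain − cost) and uses entirely hypothetical node labels and numbers for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    # Internal node: an environment assumption with an estimated likelihood.
    # Leaf node: a concrete action with estimated gain and execution cost.
    label: str
    likelihood: float = 1.0   # P(assumption holds); 1.0 at the root
    gain: float = 0.0         # goal-directed gain (leaves only)
    cost: float = 0.0         # execution cost (leaves only)
    children: List["Node"] = field(default_factory=list)

def best_action(node: Node, path_likelihood: float = 1.0) -> Tuple[float, str]:
    """Return (score, action_label) for the best-scoring root-to-leaf path,
    scoring each path as (product of likelihoods) * gain - cost.
    NOTE: this scoring rule is an assumption, not the paper's exact formula."""
    p = path_likelihood * node.likelihood
    if not node.children:            # leaf -> candidate action
        return p * node.gain - node.cost, node.label
    return max(best_action(c, p) for c in node.children)

# Hypothetical tree distilled from an LLM reasoning trace:
tree = Node("root", children=[
    Node("mug is in the kitchen", likelihood=0.7, children=[
        Node("go to kitchen and grab mug", gain=10.0, cost=2.0),
    ]),
    Node("mug is in the living room", likelihood=0.3, children=[
        Node("ask partner where the mug is", gain=10.0, cost=5.0),
        Node("search the living room", gain=10.0, cost=4.0),
    ]),
])

score, action = best_action(tree)   # -> (5.0, "go to kitchen and grab mug")
```

Under this scoring, acting on the most likely assumption (0.7 × 10 − 2 = 5.0) beats paying the communication cost of asking the partner (0.3 × 10 − 5 = −2.0), which mirrors the paper's claim that structured uncertainty handling can replace heavy communication.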
Problem

Research questions and friction points this paper is trying to address.

embodied agents
uncertainty-aware planning
multi-agent systems
partial observability
LLM reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

uncertainty-aware planning
structured decision tree
LLM reasoning trace
communication-efficient coordination
embodied agents
Seung-hun Seo
Department of Computer Science and Engineering, Korea University
Soobin Lim
Department of Computer Science and Engineering, Korea University
SeongRae Noh
Department of Computer Science and Engineering, Korea University
Haneul Kim
Department of Computer Science and Engineering, Korea University
HyeongYeop Kang
Assistant Professor, Korea University
Neural Computer Graphics · Extended Reality · Artificial Intelligence · Human-Computer Interaction