HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models

📅 2025-06-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a systemic hallucination problem when large language models (LLMs) serve as the cognitive core of embodied agents: under scene-task misalignment, LLMs disregard physical observations and blindly execute infeasible instructions (e.g., "open a non-existent refrigerator"), leading to failures in long-horizon navigation. To address this, the authors introduce the first hallucination taxonomy and reproducible probing benchmark for embodied AI, constructing adversarial instruction-scene mismatch samples in AI2-Thor and Habitat. They conduct zero-shot and fine-tuned evaluations across 12 state-of-the-art LLMs, complemented by behavioral attribution analysis. Results show the probing set raises hallucination rates by up to 40×; critically, no model reliably detects or rejects infeasible tasks, demonstrating a fundamental disconnect between LLM reasoning and environmental grounding. This work establishes a new paradigm and benchmark for robustness evaluation and grounding enhancement of embodied LLMs.

📝 Abstract
Large language models (LLMs) are increasingly being adopted as the cognitive core of embodied agents. However, inherited hallucinations, which stem from failures to ground user instructions in the observed physical environment, can lead to navigation errors, such as searching for a refrigerator that does not exist. In this paper, we present the first systematic study of hallucinations in LLM-based embodied agents performing long-horizon tasks under scene-task inconsistencies. Our goal is to understand to what extent hallucinations occur, what types of inconsistencies trigger them, and how current models respond. To achieve these goals, we construct a hallucination probing set by building on an existing benchmark, capable of inducing hallucination rates up to 40× higher than base prompts. Evaluating 12 models across two simulation environments, we find that while models exhibit reasoning, they fail to resolve scene-task inconsistencies, highlighting fundamental limitations in handling infeasible tasks. We also provide actionable insights on ideal model behavior for each scenario, offering guidance for developing more robust and reliable planning strategies.
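The probing-set idea described in the abstract, instructions that reference objects absent from the observed scene, and a check for whether a planner acts on them, can be sketched as below. This is an illustrative sketch, not the authors' released code: the object lists, the `report_infeasible` refusal convention, and the helper names are all assumptions.

```python
import random

def make_mismatch_probe(scene_objects, absent_objects,
                        template="find the {obj} and open it"):
    """Build an instruction that references an object NOT present in the scene.

    scene_objects:  objects actually observed (e.g. from an AI2-Thor scene).
    absent_objects: plausible household objects known to be missing here.
    Returns (instruction, ground_truth); ground_truth marks the task infeasible.
    """
    target = random.choice([o for o in absent_objects if o not in scene_objects])
    return template.format(obj=target), {"target": target, "feasible": False}

def is_hallucination(plan, probe_truth):
    """A planner hallucinates if it acts on an infeasible target instead of refusing.

    plan: list of action strings emitted by the model. A grounded agent is
    expected to refuse (here: emit a step containing 'report_infeasible').
    """
    if probe_truth["feasible"]:
        return False
    return not any("report_infeasible" in step for step in plan)

scene = ["sofa", "tv", "lamp"]
absent = ["refrigerator", "oven"]
instruction, truth = make_mismatch_probe(scene, absent)
# A plan that blindly navigates to the missing object counts as a hallucination:
print(is_hallucination(["goto(refrigerator)", "open(refrigerator)"], truth))  # True
```

A refusal-aware plan such as `["look_around()", "report_infeasible('not found')"]` would be scored as grounded rather than hallucinated.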
Problem

Research questions and friction points this paper is trying to address.

Measure how often hallucinations occur in LLM-based embodied agents during long-horizon tasks
Identify which scene-task inconsistencies trigger the highest hallucination rates
Characterize model limitations in handling infeasible navigation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic study of hallucinations in LLM-based agents
Construct hallucination probing set for evaluation
Provide actionable insights for robust planning strategies
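The headline "up to 40× higher" figure compares per-condition hallucination rates. A minimal sketch of that ratio, with hypothetical counts chosen only to illustrate the arithmetic (not taken from the paper):

```python
def hallucination_rate(outcomes):
    """Fraction of probes on which the model hallucinated (True entries)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Hypothetical counts, purely illustrative:
base_rate = hallucination_rate([True] * 1 + [False] * 99)    # 1% on base prompts
probe_rate = hallucination_rate([True] * 40 + [False] * 60)  # 40% on mismatch probes
print(round(probe_rate / base_rate))  # → 40
```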