Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM Planners

📅 2024-06-01
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from hallucination in closed-loop robotic task planning, leading to unreliable failure detection and system instability. Method: We propose KnowLoop, a framework built upon multimodal large language models (MLLMs), which systematically evaluates three uncertainty metrics—token-level probability, information entropy, and self-explanation confidence—for failure detection. KnowLoop introduces a model-agnostic, prompt-scalable failure detection paradigm that eliminates reliance on task-specific assumptions or strong trust in model outputs. Contribution/Results: Evaluated across three prompting strategies and a novel robotic manipulation dataset, token-level probability and entropy significantly outperform self-explanation-based confidence. With appropriately calibrated thresholds, KnowLoop achieves substantial improvements in failure detection accuracy, boosting closed-loop planning success rate and overall task reliability.
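The paper does not reproduce its exact formulas here, but the first two metrics can be sketched from per-token output distributions. In this illustrative sketch (function name, array shapes, and the use of mean per-token probability and mean Shannon entropy are assumptions, not the paper's published definitions):

```python
import numpy as np

def uncertainty_metrics(token_probs):
    """Illustrative sequence-level uncertainty scores from per-token
    probability distributions of shape [seq_len, vocab_size]."""
    token_probs = np.asarray(token_probs, dtype=float)
    # Probability the model assigned to each chosen (argmax) token.
    chosen = token_probs.max(axis=1)
    # Token-level probability metric: mean probability of generated tokens.
    mean_prob = chosen.mean()
    # Entropy metric: mean Shannon entropy of the per-token distributions
    # (higher entropy = more uncertain; eps guards log(0)).
    eps = 1e-12
    entropy = -(token_probs * np.log(token_probs + eps)).sum(axis=1).mean()
    return mean_prob, entropy
```

A confident generation (probability mass concentrated on one token per step) yields a high mean probability and low entropy, while a near-uniform distribution yields the opposite, which is what makes these scores usable for thresholded failure detection.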

📝 Abstract
Recently, Large Language Models (LLMs) have shown remarkable performance as zero-shot task planners for robotic manipulation tasks. However, the open-loop nature of previous works makes LLM-based planning error-prone and fragile. On the other hand, failure detection approaches for closed-loop planning are often limited by task-specific heuristics or rely on the unrealistic assumption that the prediction is trustworthy all the time. As general-purpose reasoning machines, LLMs or Multimodal Large Language Models (MLLMs) are promising for detecting failures. However, the appropriateness of the aforementioned assumption diminishes due to the notorious hallucination problem. In this work, we attempt to mitigate these issues by introducing a framework for closed-loop LLM-based planning called KnowLoop, backed by an uncertainty-based MLLM failure detector that is agnostic to the underlying MLLM or LLM. Specifically, we evaluate three different ways of quantifying the uncertainty of MLLMs, namely token probability, entropy, and self-explained confidence, as primary metrics based on three carefully designed representative prompting strategies. With a self-collected dataset covering various manipulation tasks and an LLM-based robot system, our experiments demonstrate that token probability and entropy are more reflective than self-explained confidence. By setting an appropriate threshold to filter out uncertain predictions and actively seeking human help, the accuracy of failure detection can be significantly enhanced. This improvement boosts the effectiveness of closed-loop planning and the overall task success rate.
Problem

Research questions and friction points this paper is trying to address.

Detects failures in closed-loop LLM-based robotic planning
Evaluates uncertainty metrics for Multimodal Large Language Models
Enhances task success by filtering uncertain predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-based MLLM failure detection framework
Token probability and entropy as uncertainty metrics
Actively seeking human help when predictions are uncertain