Adaptive Stress Testing Black-Box LLM Planners

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Black-box large language model (LLM) planners deployed in safety-critical scenarios often hallucinate, producing unreliable plans. Method: This paper proposes a prompt perturbation space exploration method that combines Adaptive Stress Testing (AST) with Monte Carlo Tree Search (MCTS). Unlike approaches that rely on prompt re-ranking or adversarial perturbations, it systematically models diverse perturbations—including noise injection and sensor information ablation—and guides the search toward input scenarios that maximize LLM uncertainty and induce planning failure. Contribution/Results: The resulting transferable perturbation tree enables offline modeling and runtime automatic prompt generation, while supporting fine-grained, real-time trust quantification. Evaluated on autonomous driving planning tasks, the method significantly improves the hallucination triggering rate and uncertainty identification accuracy, thereby enhancing planner reliability and interpretability in safety-critical applications.
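The core search loop described above can be sketched in miniature. The following is an illustrative MCTS over a toy perturbation space, not the paper's implementation: the action set, maximum depth, and especially `llm_uncertainty` (a deterministic stub standing in for querying the black-box LLM and scoring output inconsistency) are all assumptions made for the example.

```python
import math
import random

# Hypothetical perturbation actions; the paper's actual space is richer.
ACTIONS = ["add_noise", "drop_sensor", "reorder_details"]
MAX_DEPTH = 3


def llm_uncertainty(perturbations):
    """Stub reward: stands in for sampling the black-box LLM planner
    and scoring the inconsistency of its outputs. By assumption here,
    dropping sensor details is the most destabilizing perturbation."""
    weights = {"add_noise": 0.2, "drop_sensor": 0.5, "reorder_details": 0.05}
    return sum(weights[p] for p in perturbations)


class Node:
    def __init__(self, perturbations=(), parent=None):
        self.perturbations = tuple(perturbations)
        self.parent = parent
        self.children = {}
        self.visits = 0
        self.value = 0.0

    def ucb1(self, c=1.4):
        # Unvisited nodes are explored first.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)


def mcts(iterations=200, seed=0):
    rng = random.Random(seed)
    root = Node()
    for _ in range(iterations):
        node = root
        # Selection: descend via UCB1 while the node is fully expanded.
        while len(node.children) == len(ACTIONS) and len(node.perturbations) < MAX_DEPTH:
            node = max(node.children.values(), key=Node.ucb1)
        # Expansion: try one perturbation not yet applied at this node.
        if len(node.perturbations) < MAX_DEPTH:
            untried = [a for a in ACTIONS if a not in node.children]
            action = rng.choice(untried)
            child = Node(node.perturbations + (action,), parent=node)
            node.children[action] = child
            node = child
        # Rollout: complete the perturbation sequence at random,
        # then score the resulting prompt's induced uncertainty.
        rollout = list(node.perturbations)
        while len(rollout) < MAX_DEPTH:
            rollout.append(rng.choice(ACTIONS))
        reward = llm_uncertainty(rollout)
        # Backpropagation: accumulate reward up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # The most-visited root child is the most failure-inducing first step.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]


print(mcts())
```

With the stub reward above, the search concentrates its visits on the perturbation branch that maximizes uncertainty; in the paper's setting, the learned tree is then reused offline and at runtime to generate destabilizing prompts automatically.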

📝 Abstract
Large language models (LLMs) have recently demonstrated success in generalizing across decision-making tasks including planning, control and prediction, but their tendency to hallucinate unsafe and undesired outputs poses risks. We argue that detecting such failures is necessary, especially in safety-critical scenarios. Existing black-box methods often detect hallucinations by identifying inconsistencies across multiple samples. Many of these approaches typically introduce prompt perturbations like randomizing detail order or generating adversarial inputs, with the intuition that a confident model should produce stable outputs. We first perform a manual case study showing that other forms of perturbations (e.g., adding noise, removing sensor details) cause LLMs to hallucinate in a driving environment. We then propose a novel method for efficiently searching the space of prompt perturbations using Adaptive Stress Testing (AST) with Monte-Carlo Tree Search (MCTS). Our AST formulation enables discovery of scenarios and prompts that cause language models to act with high uncertainty. By generating MCTS prompt perturbation trees across diverse scenarios, we show that offline analyses can be used at runtime to automatically generate prompts that influence model uncertainty, and to inform real-time trust assessments of an LLM.
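The abstract's premise that a confident model should produce stable outputs can be made concrete with a small sketch. This is an illustrative consistency check, not the paper's detector: it scores disagreement among k sampled planner outputs as normalized entropy, where 0.0 means all samples agree and 1.0 means all differ. The example driving actions are assumptions.

```python
import math
from collections import Counter


def sample_disagreement(outputs):
    """Normalized entropy over k sampled planner outputs.

    Returns 0.0 when every sample agrees and 1.0 when all samples
    differ; high disagreement flags a possible hallucination."""
    counts = Counter(outputs)
    k = len(outputs)
    entropy = -sum((c / k) * math.log2(c / k) for c in counts.values())
    max_entropy = math.log2(k)
    return entropy / max_entropy if max_entropy > 0 else 0.0


# A stable planner: all four samples choose the same action.
print(sample_disagreement(["brake", "brake", "brake", "brake"]))  # 0.0
# An unstable planner: every sample disagrees.
print(sample_disagreement(["brake", "accelerate", "yield", "turn left"]))  # 1.0
```

A perturbation that pushes this score up (e.g., removing a sensor detail from the prompt) is exactly the kind of input the paper's AST formulation is searching for.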
Problem

Research questions and friction points this paper is trying to address.

Detect unsafe hallucinations in black-box LLM planners
Identify prompt perturbations causing high model uncertainty
Develop adaptive stress testing for real-time trust assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Stress Testing with MCTS
Generation of transferable prompt perturbation trees
Automated real-time trust assessment