CAR-bench: Evaluating the Consistency and Limit-Awareness of LLM Agents under Real-World Uncertainty

📅 2026-01-29
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical gap in existing benchmarks for large language model (LLM) agents, which often neglect the uncertainty inherent in real-world user interactions and thus fail to assess agent reliability under ambiguous or incomplete requests. To this end, the authors propose the first multi-turn dialogue evaluation benchmark tailored for in-vehicle voice assistants, integrating LLM-simulated users, 58 domain-specific tools, and policy constraints. The benchmark introduces a Disambiguation task to evaluate an agent’s ability to proactively seek clarification and, innovatively, a Hallucination task to probe its awareness of operational boundaries when information is missing. Experimental results reveal that state-of-the-art models achieve less than 50% success rates on the Disambiguation task—often failing due to premature action—and frequently hallucinate or violate policy constraints in the Hallucination task, exposing significant reliability limitations in realistic interactive settings.

📝 Abstract
Existing benchmarks for Large Language Model (LLM) agents focus on task completion under idealized settings but overlook reliability in real-world, user-facing applications. In domains such as in-car voice assistants, users often issue incomplete or ambiguous requests, creating intrinsic uncertainty that agents must manage through dialogue, tool use, and policy adherence. We introduce CAR-bench, a benchmark for evaluating consistency, uncertainty handling, and capability awareness in multi-turn, tool-using LLM agents in an in-car assistant domain. The environment features an LLM-simulated user, domain policies, and 58 interconnected tools spanning navigation, productivity, charging, and vehicle control. Beyond standard task completion, CAR-bench introduces Hallucination tasks that test agents' limit-awareness under missing tools or information, and Disambiguation tasks that require resolving uncertainty through clarification or internal information gathering. Baseline results reveal large gaps between occasional and consistent success on all task types. Even frontier reasoning LLMs achieve less than a 50% consistent pass rate on Disambiguation tasks due to premature actions, and frequently violate policies or fabricate information to satisfy user requests in Hallucination tasks, underscoring the need for more reliable and self-aware LLM agents in real-world settings.
Problem

Research questions and friction points this paper is trying to address.

LLM agents
real-world uncertainty
limit-awareness
consistency
in-car assistant
Innovation

Methods, ideas, or system contributions that make the work stand out.

CAR-bench
limit-awareness
uncertainty handling
LLM agent evaluation
hallucination mitigation
Johannes Kirmayr
BMW Group, University of Augsburg
Large Language Models, Planning and Reasoning, AI Agents
Lukas Stappen
BMW Group Research and Technology, Munich, Germany
Elisabeth André
Augsburg University, Augsburg, Germany