RoboInspector: Unveiling the Unreliability of Policy Code for LLM-enabled Robotic Manipulation

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit unreliable policy code generation for robotic manipulation due to task diversity, heterogeneous user instructions, and misaligned instruction granularity. Method: This paper introduces RoboInspector, an analytical framework that systematically attributes failures along two orthogonal dimensions (task complexity and instruction granularity), identifying four canonical failure patterns. It further proposes an iterative refinement mechanism driven by failure feedback, integrating execution monitoring, multi-configuration experimentation, and targeted code refinement. Contribution/Results: Evaluated across 168 task, instruction, and model combinations on two mainstream robotic frameworks in both simulation and real-world settings, RoboInspector improves policy code generation reliability by up to 35%, significantly enhancing the robustness and practicality of LLM-driven robotic manipulation.

📝 Abstract
Large language models (LLMs) demonstrate remarkable capabilities in reasoning and code generation, enabling robotic manipulation to be initiated with a single instruction. The LLM carries out various tasks by generating the policy code required to control the robot. Despite advances in LLMs, achieving reliable policy code generation remains a significant challenge due to the diverse requirements of real-world tasks and the inherent complexity of user instructions. In practice, different users may provide distinct instructions to drive the robot through the same task, which can make policy code generation unreliable. To bridge this gap, we design RoboInspector, a pipeline to unveil and characterize the unreliability of policy code for LLM-enabled robotic manipulation from two perspectives: the complexity of the manipulation task and the granularity of the instruction. We perform comprehensive experiments with 168 distinct combinations of tasks, instructions, and LLMs in two prominent frameworks. RoboInspector identifies four main unreliable behaviors that lead to manipulation failure. We provide a detailed characterization of these behaviors and their underlying causes, offering insight for practical development to reduce unreliability. Furthermore, we introduce a refinement approach guided by feedback from failed policy code that improves the reliability of policy code generation by up to 35% in LLM-enabled robotic manipulation, evaluated in both simulation and real-world environments.
Problem

Research questions and friction points this paper is trying to address.

Unreliability of LLM-generated robotic policy code
Diverse user instructions causing manipulation failures
Complex task requirements challenging code generation reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pipeline for unveiling policy code unreliability
Characterizes unreliability from task complexity and instruction granularity
Refinement approach using failure feedback improves reliability
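The failure-feedback refinement idea above can be sketched as a simple loop: generate policy code, execute it, and if execution fails, feed a summary of the failure back into the next generation round. This is only an illustrative sketch; the function names (`generate_policy_code`, `execute_policy`, `summarize_failure`) are hypothetical stand-ins, not the paper's actual interfaces.

```python
def refine_policy_code(instruction, generate_policy_code, execute_policy,
                       summarize_failure, max_rounds=3):
    """Iteratively regenerate policy code, feeding failure details back in.

    All callables are caller-supplied stand-ins (hypothetical, not the
    paper's API): generate_policy_code(instruction, feedback) -> code,
    execute_policy(code) -> (ok, trace), summarize_failure(code, trace) -> str.
    """
    feedback = None
    for _ in range(max_rounds):
        code = generate_policy_code(instruction, feedback)
        ok, trace = execute_policy(code)
        if ok:
            return code  # policy succeeded; stop refining
        # Turn the execution trace into feedback for the next attempt
        feedback = summarize_failure(code, trace)
    return None  # still failing after max_rounds
```

The key design choice is that feedback is derived from the failed code and its execution trace, rather than asking the LLM to regenerate from the instruction alone.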