🤖 AI Summary
This work addresses the safety and verifiability challenges that arise when non-expert users instruct large language models (LLMs) to generate robot code via natural language, which often leads to execution errors or hazardous behaviors. The authors propose RoboCritics, a framework that integrates expert-informed, motion-level critics into the LLM programming pipeline to provide real-time analysis and structured feedback on the execution trajectories of generated programs. This enables transparent user verification and one-click correction, establishing an interpretable closed-loop refinement mechanism that users can inspect and intervene in. A between-subjects user study on a UR3e platform (n=18) demonstrates that RoboCritics significantly reduces safety violations, improves task execution quality, and shapes users' verification and debugging behaviors compared to a baseline LLM interface.
📝 Abstract
End-user robot programming grants users the flexibility to re-task robots in situ, yet it remains challenging for novices due to the need for specialized robotics knowledge. Large Language Models (LLMs) hold the potential to lower the barrier to robot programming by enabling task specification through natural language. However, current LLM-based approaches generate opaque, "black-box" code that is difficult to verify or debug, creating tangible safety and reliability risks in physical systems. We present RoboCritics, an approach that augments LLM-based robot programming with expert-informed motion-level critics. These critics encode robotics expertise to analyze motion-level execution traces for issues such as joint speed violations, collisions, and unsafe end-effector poses. When violations are detected, critics surface transparent feedback and offer one-click fixes that forward structured messages back to the LLM, enabling iterative refinement while keeping users in the loop. We instantiated RoboCritics in a web-based interface connected to a UR3e robot and evaluated it in a between-subjects user study (n=18). Compared to a baseline LLM interface, RoboCritics reduced safety violations, improved execution quality, and shaped how participants verified and refined their programs. Our findings demonstrate that RoboCritics enables more reliable and user-centered end-to-end robot programming with LLMs.
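To make the critic mechanism concrete, here is a minimal sketch of what one motion-level critic might look like: it scans a joint-position trace for speed-limit violations and, on failure, produces a structured message of the kind the abstract says is forwarded back to the LLM. All names, the speed limit, and the report format are hypothetical illustrations, not the paper's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical per-joint speed limit for illustration only.
JOINT_SPEED_LIMIT = 1.0  # rad/s


@dataclass
class CriticReport:
    critic: str       # which critic produced this report
    violation: bool   # whether a safety rule was broken
    message: str      # structured feedback that could be forwarded to the LLM


def joint_speed_critic(trace, dt):
    """Check a sampled joint-position trace for speed violations.

    `trace` is a list of joint-position vectors (radians) sampled
    every `dt` seconds; speeds are estimated by finite differences.
    """
    for t in range(1, len(trace)):
        speeds = [abs(q1 - q0) / dt for q0, q1 in zip(trace[t - 1], trace[t])]
        worst = max(speeds)
        if worst > JOINT_SPEED_LIMIT:
            joint = speeds.index(worst)
            return CriticReport(
                critic="joint_speed",
                violation=True,
                message=(
                    f"Joint {joint} reaches {worst:.2f} rad/s at step {t}, "
                    f"exceeding the {JOINT_SPEED_LIMIT:.1f} rad/s limit; "
                    "slow the motion or add intermediate waypoints."
                ),
            )
    return CriticReport(critic="joint_speed", violation=False, message="ok")
```

In the same spirit, collision and end-effector-pose critics would consume the same trace and return reports in this shared format, so a "one-click fix" can simply forward the offending report's message to the LLM as a repair prompt.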