🤖 AI Summary
Problem: Task planning in language-driven robotics is failure-prone, and small-scale LLMs in particular exhibit poor grounding and low execution robustness in physical environments.
Method: We propose a modular architecture grounded in goal-conditioned POMDP modeling with decoupled components. It introduces a novel “Expected Outcome” module to suppress misrepresentation of subgoals and integrates runtime state feedback for closed-loop online error recovery.
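The closed-loop pattern described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the `Subgoal` structure, the `act`/`replan` callables, and the retry bound are all hypothetical names introduced here, assuming a planner that emits each action together with a verifiable expected outcome.

```python
# Hypothetical sketch of planning with expected-outcome verification and
# closed-loop error recovery; names are illustrative, not the paper's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subgoal:
    action: str                               # e.g. "pick(red_block)"
    expected_outcome: Callable[[dict], bool]  # predicate over the observed state

def execute_plan(plan: list[Subgoal],
                 act: Callable[[str], dict],
                 replan: Callable[[dict], list[Subgoal]],
                 max_retries: int = 2) -> bool:
    """Run subgoals in order; after each action, check the observed state
    against the subgoal's expected outcome and replan on a mismatch."""
    queue = list(plan)
    retries = 0
    while queue:
        step = queue.pop(0)
        state = act(step.action)          # runtime state feedback after acting
        if not step.expected_outcome(state):
            if retries >= max_retries:
                return False              # bounded number of recovery attempts
            retries += 1
            queue = replan(state)         # closed-loop online error recovery
    return True
```

The key design point is that each subgoal carries its own success predicate, so a mischaracterized subgoal is caught immediately after execution rather than propagating silently through the rest of the plan.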
Contribution/Results: We demonstrate, for the first time, the feasibility and efficiency of deploying lightweight LLMs directly on real robotic platforms. Experiments on simulated and real-world pick-and-place tasks show that our approach significantly outperforms large-model baselines and standard methods in success rate, while achieving low latency (<200 ms) and minimal resource consumption (CPU memory <1.2 GB).
📝 Abstract
Recent advances in large language models (LLMs) have led to significant progress in robotics, enabling embodied agents to better understand and execute open-ended tasks. However, existing approaches using LLMs face limitations in grounding their outputs within the physical environment and aligning with the capabilities of the robot. This challenge becomes even more pronounced with smaller language models, which are more computationally efficient but less robust in task planning and execution. In this paper, we present a novel modular architecture designed to enhance the robustness of LLM-driven robotics by addressing these grounding and alignment issues. We formalize the task planning problem within a goal-conditioned POMDP framework, identify key failure modes in LLM-driven planning, and propose targeted design principles to mitigate these issues. Our architecture introduces an "expected outcomes" module to prevent mischaracterization of subgoals and a feedback mechanism to enable real-time error recovery. Experimental results, both in simulation and on physical robots, demonstrate that our approach significantly improves task success rates for pick-and-place and manipulation tasks compared to both larger LLMs and standard baselines. Through hardware experiments, we also demonstrate that our architecture can run efficiently and locally. This work highlights the potential of smaller, locally executable LLMs in robotics and provides a scalable, efficient solution for robust task execution.