From Language to Action: Can LLM-Based Agents Be Used for Embodied Robot Cognition?

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the feasibility of employing large language models (LLMs) as core cognitive controllers for embodied agents, aiming to bridge the gap between high-level linguistic instructions and low-level perception-action loops. To this end, the authors propose an LLM-driven cognitive architecture that integrates working memory and episodic memory, enabling a mobile manipulator to execute tasks in simulated household environments through high-level tool interfaces such as navigation, grasping, and placing. Experimental results demonstrate that the approach exhibits emergent reasoning and memory-guided adaptive planning capabilities in structured object manipulation tasks. This study provides the first systematic validation of LLMs’ potential in embodied intelligence, while also uncovering critical challenges including task hallucination and insufficient instruction following.

📝 Abstract
In order to flexibly act in an everyday environment, a robotic agent needs a variety of cognitive capabilities that enable it to reason about plans and perform execution recovery. Large language models (LLMs) have been shown to demonstrate emergent cognitive aspects, such as reasoning and language understanding; however, the ability to control embodied robotic agents requires reliably bridging high-level language to low-level functionalities for perception and control. In this paper, we investigate the extent to which an LLM can serve as a core component for planning and execution reasoning in a cognitive robot architecture. For this purpose, we propose a cognitive architecture in which an agentic LLM serves as the core component for planning and reasoning, while components for working and episodic memories support learning from experience and adaptation. An instance of the architecture is then used to control a mobile manipulator in a simulated household environment, where environment interaction is done through a set of high-level tools for perception, reasoning, navigation, grasping, and placement, all of which are made available to the LLM-based agent. We evaluate our proposed system on two household tasks (object placement and object swapping), which test the agent's reasoning, planning, and memory utilisation. The results demonstrate that the LLM-driven agent can complete structured tasks and exhibits emergent adaptation and memory-guided planning, but also reveal significant limitations, such as hallucinations about task success and poor instruction following, for instance refusing to acknowledge and complete sequential tasks. These findings highlight both the potential and challenges of employing LLMs as embodied cognitive controllers for autonomous robots.
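The architecture described above (an agentic LLM planner calling high-level tools, supported by working and episodic memory) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the tool names, the `Memory` structure, and the scripted planner standing in for the LLM are not the authors' implementation or API.

```python
# Conceptual sketch of an LLM-driven agent loop with high-level tools and
# working/episodic memory. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Memory:
    working: list = field(default_factory=list)   # recent tool calls and results
    episodic: list = field(default_factory=list)  # records of completed tasks

# High-level tool interface exposed to the planner (stub effects only)
TOOLS = {
    "navigate": lambda target: f"at {target}",
    "grasp":    lambda obj:    f"holding {obj}",
    "place":    lambda loc:    f"placed at {loc}",
}

def scripted_planner(task, memory):
    """Stand-in for the LLM: yields a fixed tool sequence for an
    object-placement task. A real agent would query the LLM here,
    conditioning on the task and on memory contents."""
    obj, loc = task
    yield ("navigate", obj)
    yield ("grasp", obj)
    yield ("navigate", loc)
    yield ("place", loc)

def run_task(task, memory):
    """Execute one task: call tools, log each step to working memory,
    then archive the full trace as an episodic record."""
    for tool, arg in scripted_planner(task, memory):
        result = TOOLS[tool](arg)
        memory.working.append((tool, arg, result))
    memory.episodic.append({"task": task, "trace": list(memory.working)})
    memory.working.clear()
    return memory.episodic[-1]

mem = Memory()
record = run_task(("cup", "shelf"), mem)
print(record["trace"][-1])  # ('place', 'shelf', 'placed at shelf')
```

The split mirrors the paper's description: working memory holds the in-progress context the planner would see, while episodic memory retains finished task traces that could inform later, memory-guided planning.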
Problem

Research questions and friction points this paper is trying to address.

embodied cognition
large language models
robotic agents
cognitive architecture
language-to-action
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based agent
embodied cognition
cognitive architecture
memory-guided planning
robotic reasoning
Shinas Shaji
Institute of AI and Autonomous Systems (A²S), Hochschule Bonn-Rhein-Sieg, Sankt Augustin, Germany; Fraunhofer Institute for Intelligent Analysis and Information Systems, Sankt Augustin, Germany
Fabian Huppertz
Institute of AI and Autonomous Systems (A²S), Hochschule Bonn-Rhein-Sieg, Sankt Augustin, Germany
Alex Mitrevski
Division for Systems and Control, Chalmers University of Technology, Gothenburg, Sweden; Fraunhofer Institute for Intelligent Analysis and Information Systems, Sankt Augustin, Germany
Sebastian Houben
University of Applied Sciences Bonn-Rhein-Sieg
Real-time Computer Vision; Trustworthy AI