Rethinking Progression of Memory State in Robotic Manipulation: An Object-Centric Perspective

📅 2025-11-14
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In robotic manipulation, visual similarity among objects induces partial observability, exacerbating the challenge of object-level historical memory and reasoning in non-Markovian environments. Method: We propose Embodied-SlotSSM, a modular architecture centered on object-centric, updateable slot-based state-space models (SlotSSMs), augmented with a relational encoder to enforce spatiotemporal consistency in object memory, and integrated into a vision-language-action (VLA) framework. Contribution/Results: To rigorously evaluate non-Markovian decision-making, we introduce LIBERO-Mem, the first benchmark explicitly designed for memory evolution. Experiments demonstrate that Embodied-SlotSSM significantly outperforms baselines on LIBERO-Mem and general long-horizon tasks, achieving the first scalable, efficient modeling of object interaction histories. This work establishes a novel paradigm for persistent object representation in embodied intelligence.

๐Ÿ“ Abstract
As embodied agents operate in increasingly complex environments, the ability to perceive, track, and reason about individual object instances over time becomes essential, especially in tasks requiring sequenced interactions with visually similar objects. In these non-Markovian settings, key decision cues are often hidden in object-specific histories rather than the current scene. Without persistent memory of prior interactions (what has been interacted with, where it has been, or how it has changed), visuomotor policies may fail, repeat past actions, or overlook completed ones. To surface this challenge, we introduce LIBERO-Mem, a non-Markovian task suite for stress-testing robotic manipulation under object-level partial observability. It combines short- and long-horizon object tracking with temporally sequenced subgoals, requiring reasoning beyond the current frame. However, vision-language-action (VLA) models often struggle in such settings, with token scaling quickly becoming intractable even for tasks spanning just a few hundred frames. We propose Embodied-SlotSSM, a slot-centric VLA framework built for temporal scalability. It maintains spatio-temporally consistent slot identities and leverages them through two mechanisms: (1) slot-state-space modeling for reconstructing short-term history, and (2) a relational encoder to align the input tokens with action decoding. Together, these components enable temporally grounded, context-aware action prediction. Experiments show Embodied-SlotSSM's baseline performance on LIBERO-Mem and general tasks, offering a scalable solution for non-Markovian reasoning in object-centric robotic policies.
Problem

Research questions and friction points this paper is trying to address.

Addressing object-level partial observability in robotic manipulation tasks
Solving non-Markovian reasoning challenges with persistent memory requirements
Developing scalable vision-language-action models for long-horizon object tracking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Slot-centric VLA framework for temporal scalability
Slot-state-space modeling reconstructs short-term history
Relational encoder aligns input tokens with action decoding
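The slot-state-space idea in the bullets above can be pictured as a per-slot recurrent update: each object slot carries its own memory vector that is decayed and refreshed with the current frame's features. The function and scalar parameters below are illustrative placeholders, a minimal sketch of the general mechanism rather than the paper's learned SlotSSM:

```python
def slot_ssm_step(h, x, a=0.9, b=0.1):
    """One recurrent update per object slot: h_t = a * h_{t-1} + b * x_t.

    h, x: per-slot feature vectors, shape (num_slots, d) as nested lists.
    a: retention of past slot memory; b: gain on the current frame's features.
    Scalars a and b stand in for the learned state-space parameters.
    """
    return [[a * hi + b * xi for hi, xi in zip(hs, xs)]
            for hs, xs in zip(h, x)]

# Roll three object slots through a short frame sequence; each slot keeps
# an independent history, so visually similar objects stay distinguishable.
num_slots, d = 3, 4
h = [[0.0] * d for _ in range(num_slots)]
frames = [[[float(t + s)] * d for s in range(num_slots)] for t in range(5)]
for x in frames:
    h = slot_ssm_step(h, x)
```

Because the state is factored per slot, memory cost grows with the number of objects rather than with the number of frames, which is the scalability property the abstract highlights.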
👥 Authors

Nhat Chung
FPT Software AI Center
Taisei Hanyu
University of Arkansas
Toan Nguyen
FPT Software AI Center
Huy Le
FPT Software AI Center
Frederick Bumgarner
University of Arkansas
Duy Minh Ho Nguyen
University of Stuttgart
Khoa T. Vo
University of Arkansas
Kashu Yamazaki
Carnegie Mellon University, Genesis AI
Chase Rainwater
University of Arkansas
Tung Kieu
Aalborg University, Department of Computer Science
Anh Nguyen
University of Liverpool
Ngan Le
University of Arkansas