🤖 AI Summary
This work addresses the challenge of non-Markovian observations in long-horizon robotic tasks, where occlusions or environmental changes hinder reliable decision-making. To address this, the authors propose Chameleon, a system inspired by human episodic memory that builds a geometry-anchored, multimodal memory to preserve fine-grained perceptual cues. Chameleon incorporates a differentiable memory stack that enables goal-directed episodic recall, avoiding the loss of critical contextual information inherent in conventional semantic-compression approaches. Evaluations on the Camo-Dataset and a real-world UR5e robotic platform show that Chameleon substantially outperforms strong baselines, improving decision reliability and control performance in perceptually ambiguous scenarios.
📝 Abstract
Robotic manipulation often requires memory: occlusions and state changes can leave decision-time observations perceptually aliased, so action selection becomes non-Markovian at the observation level because the same observation may arise from different interaction histories. Most embodied agents implement memory via semantically compressed traces and similarity-based retrieval, which discard disambiguating fine-grained perceptual cues and can return perceptually similar but decision-irrelevant episodes. Inspired by human episodic memory, we propose Chameleon, which writes geometry-grounded multimodal tokens to preserve disambiguating context and produces goal-directed recall through a differentiable memory stack. We also introduce Camo-Dataset, a real-robot UR5e dataset spanning episodic recall, spatial tracking, and sequential manipulation under perceptual aliasing. Across tasks, Chameleon consistently improves decision reliability and long-horizon control over strong baselines in perceptually confusable settings.
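The abstract does not detail how the differentiable memory stack operates. As a point of reference only, the sketch below shows a generic soft push/pop stack in the style of neural-stack architectures, where continuous push and pop strengths keep every memory operation differentiable; all function names and the update rule here are illustrative assumptions, not Chameleon's actual mechanism.

```python
import numpy as np

def soft_stack_step(values, strengths, v_new, d_push, u_pop):
    """One soft update of a differentiable stack (illustrative sketch,
    not the paper's implementation).
    values:    list of stored vectors, oldest first
    strengths: per-slot read strengths in [0, 1]
    v_new:     vector to push
    d_push:    push strength in [0, 1]
    u_pop:     pop strength in [0, 1]
    """
    # Pop softly from the top down: subtract pop budget from strengths.
    new_strengths = []
    remaining = u_pop
    for s in reversed(strengths):
        new_strengths.append(max(0.0, s - remaining))
        remaining = max(0.0, remaining - s)
    new_strengths.reverse()
    # Push the new vector with continuous strength d_push.
    return values + [v_new], new_strengths + [d_push]

def soft_read(values, strengths):
    """Read a strength-weighted blend of the top of the stack,
    accumulating at most a total weight of 1."""
    r = np.zeros_like(values[0])
    budget = 1.0
    for v, s in zip(reversed(values), reversed(strengths)):
        w = min(s, budget)
        r = r + w * v
        budget -= w
        if budget <= 0.0:
            break
    return r
```

Because every operation is a max/min/weighted-sum over continuous strengths, gradients can flow through recall, which is what allows goal-directed retrieval to be trained end to end rather than relying on hard similarity lookups.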