🤖 AI Summary
This work addresses the challenge of making step-wise action decisions from coupled tactile and joint-state signals on mobile manipulators under stringent computational constraints. Methodologically, it introduces a hetero-associative sequence memory model: joint angles are represented via population coding; Izhikevich neurons convert tactile force into spike-rate features; 3D rotary positional embeddings, modulated by force direction through subspace rotation, enhance separability in binary vector space; and element-wise bipolar binding coupled with softmax-weighted recall enables geometry-aware fuzzy retrieval and continuous action generation. The key contribution is the first compact neuromorphic associative memory system explicitly designed for compliant control and multi-joint grasping sequences. It trains rapidly, exhibits a useful degree of generalization, and has been validated on the Toyota Human Support Robot, demonstrating both single-joint compliant motion and full-arm sequential execution. The framework further suggests extensions to imitation learning, motion planning, and multimodal integration.
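To make the two input encoders concrete, here is a minimal Python sketch of one plausible reading of the pipeline above. All function names, the force-to-current gain, and the regular-spiking Izhikevich constants are illustrative assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def population_code(theta, n=64, lo=-np.pi, hi=np.pi, sigma=0.15):
    """Place-style population coding: each of n cells fires as a
    Gaussian bump around its preferred joint angle."""
    centers = np.linspace(lo, hi, n)
    return np.exp(-((theta - centers) ** 2) / (2.0 * sigma ** 2))

def izhikevich_rate(force, gain=20.0, dt=0.5, t_ms=200.0,
                    a=0.02, b=0.2, c=-65.0, d=8.0):
    """Convert a tactile force reading into a spike rate (Hz) by driving
    a regular-spiking Izhikevich neuron with a current I = gain * force
    (the linear force-to-current mapping is an assumption)."""
    v, u, spikes = c, b * c, 0
    for _ in range(int(t_ms / dt)):
        # Euler integration of dv/dt = 0.04 v^2 + 5 v + 140 - u + I
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + gain * force)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: reset membrane state, count the event
            v, u = c, u + d
            spikes += 1
    return spikes / (t_ms / 1000.0)
```

In this reading, a full joint configuration would concatenate per-joint population codes, and each skin taxel's force reading would map to one spike-rate feature; both streams would then be thresholded into bipolar vectors before binding, as sketched after the abstract.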
📝 Abstract
This paper presents a hetero-associative sequential memory system for mobile manipulators that learns compact, neuromorphic bindings between robot joint states and tactile observations to produce step-wise action decisions at low compute and memory cost. The method encodes joint angles via population place coding and converts skin-measured forces into spike-rate features using an Izhikevich neuron model; both signals are transformed into bipolar binary vectors and bound element-wise to create associations stored in a large-capacity sequential memory. To improve separability in binary space and inject geometry from touch, we introduce 3D rotary positional embeddings that rotate subspaces as a function of sensed force direction, enabling fuzzy retrieval through a softmax-weighted recall over temporally shifted action patterns. On a Toyota Human Support Robot covered with robot skin, the hetero-associative sequential memory system realizes a pseudocompliance controller that moves the touched link in the direction of the applied force, at a speed correlated with the force magnitude, and it retrieves multi-joint grasp sequences under continued tactile input. The system sets up quickly, trains from synchronized streams of states and observations, and exhibits a degree of generalization while remaining computationally economical. Results demonstrate single-joint and full-arm behaviors executed via associative recall and suggest extensions to imitation learning, motion planning, and multi-modal integration.
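As an illustration of the binding-and-recall mechanism, the sketch below shows one plausible reading of the abstract's pipeline: real-valued codes are thresholded into {-1, +1} vectors, a rotary-style embedding rotates consecutive 2D subspaces by angles derived from the sensed force direction, keys are formed by element-wise binding, and retrieval is a softmax-weighted blend of stored action patterns. Every name, threshold, and angle schedule here is a hypothetical choice for exposition, not the authors' implementation:

```python
import numpy as np

def to_bipolar(x):
    """Threshold a real-valued feature vector into a {-1, +1} code
    (the median split is an arbitrary choice for this sketch)."""
    return np.where(x >= np.median(x), 1.0, -1.0)

def rotate_subspaces(z, force_dir):
    """Rotary-style embedding: rotate consecutive 2D subspaces of z by
    angles modulated by the 3D force direction (angle schedule assumed)."""
    z = z.astype(float).copy()
    angles = np.resize(np.arctan2(force_dir[1:], force_dir[:-1]), len(z) // 2)
    for i, ang in enumerate(angles):
        a, b = z[2 * i], z[2 * i + 1]
        z[2 * i] = np.cos(ang) * a - np.sin(ang) * b
        z[2 * i + 1] = np.sin(ang) * a + np.cos(ang) * b
    return z

def bind(state_code, touch_code):
    """Element-wise bipolar binding: the product is quasi-orthogonal to
    both inputs, which keeps stored associations separable."""
    return state_code * touch_code

def softmax_recall(query, keys, actions, beta=8.0):
    """Fuzzy retrieval: similarity of the query to each stored key weights
    a softmax blend over temporally shifted action patterns, yielding a
    continuous action rather than a hard nearest-neighbor match."""
    sims = keys @ query / keys.shape[1]       # normalized dot products
    w = np.exp(beta * (sims - sims.max()))
    return (w / w.sum()) @ actions
```

Under these assumptions, training amounts to storing (bound key, next-action) pairs from the synchronized state/observation streams, and runtime recall blends the actions whose keys best match the current bound query, which is consistent with the continuous, geometry-aware retrieval the abstract describes.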