HiPPO Zoo: Explicit Memory Mechanisms for Interpretable State Space Models

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of efficient and interpretable memory mechanisms in sequence modeling by proposing the “HiPPO Zoo,” a unified framework that extends the HiPPO theory to construct five explicit memory architectures. The approach embeds key capabilities of modern state space models (SSMs)—such as online adaptive memory allocation and associative recall—into interpretable structures based on orthogonal polynomials. By integrating structured linear differential equations with streaming-compatible training, the method enables efficient online updates while preserving theoretical transparency. Experimental results demonstrate that the proposed models effectively replicate the performance of contemporary SSMs across multiple synthetic tasks, all while maintaining clear interpretability and controllability of the memory dynamics.

📝 Abstract
Representing the past in a compressed, efficient, and informative manner is a central problem for systems trained on sequential data. The HiPPO framework, originally proposed by Gu, Dao, et al., provides a principled approach to sequential compression by projecting signals onto orthogonal polynomial (OP) bases via structured linear ordinary differential equations. Subsequent works have embedded these dynamics in state space models (SSMs), where the HiPPO structure serves as an initialization. Nonlinear successors of these SSM methods such as Mamba are state-of-the-art for many tasks with long-range dependencies, but the mechanisms by which they represent and prioritize history remain largely implicit. In this work, we revisit the HiPPO framework with the goal of making these mechanisms explicit. We show how polynomial representations of history can be extended to support capabilities of modern SSMs such as adaptive allocation of memory and associative memory while retaining direct interpretability in the OP basis. We introduce a unified framework comprising five such extensions, which we collectively refer to as a "HiPPO zoo." Each extension exposes a specific modeling capability through an explicit, interpretable modification of the HiPPO framework. The resulting models adapt their memory online and train in streaming settings with efficient updates. We illustrate the behaviors and modeling advantages of these extensions through a range of synthetic sequence modeling tasks, demonstrating that capabilities typically associated with modern SSMs can be realized through explicit, interpretable polynomial memory structures.
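To make the core mechanism concrete, here is a minimal sketch of the HiPPO-LegS online update that the abstract builds on, following the standard formulation from Gu et al. (2020). The matrix definitions and the bilinear (trapezoidal) discretization below are assumptions based on that prior work, not this paper's own code.

```python
import numpy as np

def hippo_legs_matrices(N):
    """HiPPO-LegS transition matrix A and input vector B (standard form).

    A[n, k] = sqrt((2n+1)(2k+1)) for n > k, n + 1 for n == k, 0 otherwise;
    B[n] = sqrt(2n+1).
    """
    q = np.sqrt(2.0 * np.arange(N) + 1.0)
    A = np.tril(np.outer(q, q), k=-1) + np.diag(np.arange(1, N + 1, dtype=float))
    return A, q

def compress(signal, N=16):
    """Stream a 1-D signal into N orthogonal-polynomial coefficients.

    Integrates dc/dt = -(1/t) A c + (1/t) B f(t) with a bilinear step
    of size 1/k at step k:
        (I + A/(2k)) c_k = (I - A/(2k)) c_{k-1} + (B/k) f_k
    so the coefficient vector c is an online, fixed-size summary of
    the entire history of the signal.
    """
    A, B = hippo_legs_matrices(N)
    I = np.eye(N)
    c = np.zeros(N)
    for k, f in enumerate(signal, start=1):
        lhs = I + A / (2 * k)
        rhs = (I - A / (2 * k)) @ c + (B / k) * f
        c = np.linalg.solve(lhs, rhs)
    return c

coeffs = compress(np.sin(np.linspace(0.0, 4.0 * np.pi, 200)))
print(coeffs.shape)  # (16,)
```

Each update costs only a small linear solve in the state dimension, which is what makes the streaming training mentioned in the abstract feasible; the paper's extensions modify this base recurrence to add capabilities such as adaptive memory allocation and associative recall.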
Problem

Research questions and friction points this paper is trying to address.

interpretable memory · state space models · HiPPO · sequential data · history representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

HiPPO · state space models · interpretable memory · orthogonal polynomials · adaptive memory
Jack Goffinet
Department of Computer Science, Duke University, Durham NC, USA

Casey Hanks
Department of Computer Science, Duke University, Durham NC, USA

David E. Carlson
Associate Professor, Duke University
Machine Learning, Deep Learning, Data Science, Environmental Health, Brain Models