🤖 AI Summary
State space models (SSMs) lack a clear mechanistic account of how they selectively represent information, and they can be inefficient at modeling long-range dependencies. Method: This paper proposes SeRpEnt, an information-aware learnable sequence compression framework. Building on the novel insight that Mamba's selection mechanism fundamentally performs linear approximation of information, SeRpEnt introduces an entropy-guided adaptive resampling scheme: it jointly optimizes the selective state parameters and employs a lightweight differentiable compression module to identify high-information-density segments and resample the sequence non-uniformly. Contribution/Results: Evaluated on the Long Range Arena and several language modeling benchmarks, SeRpEnt reports a 37% reduction in inference latency and a 42% smaller memory footprint, while significantly improving both efficiency and accuracy in long-range dependency modeling.
📝 Abstract
State Space Models (SSMs) have recently enjoyed a rise to prominence in the field of deep learning for sequence modeling, especially as an alternative to Transformers. Their success stems from avoiding two well-known drawbacks of attention-based models: quadratic complexity with respect to the sequence length and inability to model long-range dependencies. The SSM variant Mamba has demonstrated performance comparable to Transformers without any form of attention, thanks to the use of a selective mechanism for the state parameters. Selectivity, however, has only been evaluated empirically, and the reasons for its effectiveness remain unclear. In this work, we show how selectivity is related to sequence processing. Our analysis shows that selective time intervals in Mamba act as linear approximators of information. We then propose our SeRpEnt architecture, an SSM that further exploits selectivity to compress sequences in an information-aware fashion. It employs a resampling mechanism that aggregates elements based on their information content. Our empirical results on the Long Range Arena benchmark and other language modeling tasks show the benefits of SeRpEnt's resampling mechanism.