HyperMLP: An Integrated Perspective for Sequence Modeling

📅 2026-02-13
📈 Citations: 0
Influential: 0

📝 Abstract
Self-attention is often viewed as probabilistic query-key lookup, motivating designs that preserve normalized attention scores and fixed positional semantics. We advocate a simpler and more unified perspective: an autoregressive attention head can be viewed as a dynamic two-layer MLP whose weights are instantiated from the context history. From this view, attention scores form an ever-growing hidden representation, and standard MLP activations such as ReLU or GLU naturally implement input-conditioned selection over a context-dependent memory pool rather than a probability distribution. Based on this formulation, we introduce HyperMLP and HyperGLU, which learn dynamic mixing in both feature space and sequence space, using a reverse-offset (lag) layout to align temporal mixing with autoregressive semantics. We provide theoretical characterizations of the expressivity and implications of this structure, and empirically show that HyperMLP/HyperGLU consistently outperform strong softmax-attention baselines under matched parameter budgets.
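The reinterpretation in the abstract can be made concrete with a small sketch: a softmax attention head computes exactly a two-layer MLP whose first-layer weights are the context's keys and whose second-layer weights are its values, with softmax as the hidden activation. Swapping that activation for ReLU gives the kind of input-conditioned selection the paper attributes to HyperMLP. This is an illustrative reading only; the names and the ReLU choice here are assumptions, and the paper's actual layout (including the reverse-offset lag structure) may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t = 4, 6                      # head dimension, context length
K = rng.normal(size=(t, d))      # keys built from the context history
V = rng.normal(size=(t, d))      # values built from the context history
q = rng.normal(size=(d,))        # current query

# Standard softmax attention: a normalized lookup over the history.
scores = K @ q
attn = np.exp(scores) / np.exp(scores).sum()
out_softmax = attn @ V

# The same computation read as a dynamic two-layer MLP with
# context-instantiated weights W1 = K, W2 = V:
#   hidden = activation(W1 @ q),  out = W2.T @ hidden
hidden_sm = np.exp(K @ q) / np.exp(K @ q).sum()
assert np.allclose(V.T @ hidden_sm, out_softmax)

# Replacing softmax with ReLU drops the probability-distribution
# constraint: the hidden units now perform unnormalized selection
# over the context-dependent memory pool (HyperMLP-style, illustrative).
hidden_relu = np.maximum(K @ q, 0.0)
out_relu = V.T @ hidden_relu
```

Note that the hidden layer here grows with the sequence: each new token appends a row to `K` and `V`, which is the "ever-growing hidden representation" the abstract describes.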
Problem

Research questions and friction points this paper is trying to address.

sequence modeling
self-attention
attention mechanism
autoregressive modeling
dynamic representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

HyperMLP
dynamic MLP
autoregressive modeling
sequence modeling
attention reinterpretation