🤖 AI Summary
This work proposes DARKFormer, a novel approach to linearizing Transformer attention that addresses the performance degradation of existing methods—such as Performer—when applied to pre-trained models with anisotropic query and key representations. Standard attention mechanisms suffer from quadratic complexity, limiting scalability, while current linear approximations exhibit high variance due to isotropic random feature sampling. DARKFormer introduces data-aware kernel geometry into random feature attention: it designs a learnable, data-aligned softmax kernel and learns a covariance structure for the random projections, enabling variance-minimizing importance sampling. This method maintains linear computational complexity while significantly improving attention approximation accuracy, particularly during fine-tuning, where anisotropic representations are prevalent. As a result, DARKFormer effectively narrows the performance gap with exact softmax attention and is well-suited for resource-constrained settings.
📝 Abstract
Transformers excel across domains, yet their quadratic attention complexity poses a barrier to scaling. Random-feature attention, as in Performers, can reduce this cost to linear in the sequence length by approximating the softmax kernel with positive random features drawn from an isotropic distribution. In pretrained models, however, queries and keys are typically anisotropic. This induces high Monte Carlo variance in isotropic sampling schemes unless one retrains the model or uses a large feature budget. Importance sampling can address this by adapting the sampling distribution to the input geometry, but complex data-dependent proposal distributions are often intractable. We show that by data-aligning the softmax kernel, we obtain an attention mechanism that both admits a tractable minimum-variance proposal distribution for importance sampling and exhibits better training stability. Motivated by this finding, we introduce DARKFormer, a Data-Aware Random-feature Kernel transformer that features a data-aligned kernel geometry. DARKFormer learns the random-projection covariance, efficiently realizing an importance-sampled positive random-feature estimator for its data-aligned kernel. Empirically, DARKFormer narrows the performance gap with exact softmax attention, particularly in fine-tuning regimes where pretrained representations are anisotropic. By combining random-feature efficiency with data-aware kernels, DARKFormer advances kernel-based attention in resource-constrained settings.