🤖 AI Summary
This work addresses the limited expressiveness of conventional Transformer readout mechanisms and the prototype collapse commonly induced by prototype compression. We propose DDCL-Attention, a prototype readout layer based on soft probabilistic matching that produces compact representations with complexity linear in sequence length and is trained jointly with the encoder. Collapse is mitigated by an exact decomposition of the loss into reconstruction and diversity terms, and stable joint-training conditions, including explicit learning-rate constraints, are derived via Tikhonov's singular perturbation theory. The method unifies readout, differentiable codebooks, and hierarchical compression within a single framework. Experiments demonstrate that the loss decomposition holds, prototype separation improves, and the codebook is fully utilized, consistently outperforming standard hard quantization approaches across four benchmarks and scientific tabular tasks such as orbital debris classification.
📝 Abstract
DDCL-Attention is a prototype-based readout layer for transformer encoders that replaces simple pooling methods, such as mean pooling or class tokens, with a learned compression mechanism. It uses a small set of global prototype vectors and assigns tokens to them through soft probabilistic matching, producing compact token summaries with complexity linear in sequence length.
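The text does not give the exact readout equations, but a minimal sketch of one plausible soft-matching readout is easy to write down: score each token against every prototype, softmax over the prototype axis, then pool tokens by their assignment weights. All names here (`prototype_readout`, the scaled dot-product scoring) are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_readout(tokens, prototypes):
    """Soft-assign each token to k global prototypes, then pool.

    tokens:     (n, d) encoder outputs
    prototypes: (k, d) learned prototype vectors, k << n
    returns:    (k, d) compact summary; the dominant cost is the
                (n, k) score matrix, i.e. linear in sequence length n.
    """
    d = tokens.shape[1]
    # (n, k): each token's soft assignment over the prototypes
    assign = softmax(tokens @ prototypes.T / np.sqrt(d), axis=1)
    # normalise per prototype so each summary is a weighted mean of tokens
    weights = assign / (assign.sum(axis=0, keepdims=True) + 1e-9)
    return weights.T @ tokens

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 16))   # n = 128 tokens, d = 16
P = rng.normal(size=(4, 16))     # k = 4 prototypes
S = prototype_readout(X, P)
print(S.shape)
```

Because the summary has fixed size (k, d) regardless of n, it can serve directly as the encoder's readout in place of mean pooling or a class token.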
The method offers three main advantages. First, it avoids prototype collapse through an exact decomposition of the training loss into a reconstruction term and a diversity term, ensuring that prototypes remain distinct. Second, joint training with the encoder is shown to be stable under a practical timescale condition derived via Tikhonov's singular perturbation theory, which yields explicit learning-rate constraints. Third, the same framework supports three uses: a final readout layer, a differentiable codebook extending VQ-VAE, and a hierarchical document compressor.
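The abstract names the two loss terms but not their functional form. As an illustration only (the exact terms are the paper's, not reproduced here), a reconstruction-plus-diversity objective can be sketched as: a reconstruction error for explaining tokens by prototype mixtures, plus a penalty on the off-diagonal mass of the prototypes' normalized Gram matrix, which is exactly zero when prototypes are mutually orthogonal and grows as they collapse onto each other. The weight `lam` and the timescale ratio in the comment are hypothetical placeholders.

```python
import numpy as np

def decomposed_loss(tokens, prototypes, assign, lam=0.1):
    """Illustrative two-term objective (not the paper's exact loss).

    tokens:     (n, d) encoder outputs
    prototypes: (k, d) prototype vectors
    assign:     (n, k) soft assignments of tokens to prototypes
    """
    # reconstruction: how well prototype mixtures explain the tokens
    recon = np.mean((tokens - assign @ prototypes) ** 2)
    # diversity: off-diagonal mass of the normalised prototype Gram matrix;
    # zero for orthogonal prototypes, maximal when they all coincide
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    gram = P @ P.T
    k = gram.shape[0]
    diversity = np.sum((gram - np.eye(k)) ** 2) / (k * (k - 1))
    # A two-timescale setup in the spirit of the stability condition would
    # update prototypes with a smaller learning rate than the encoder,
    # e.g. lr_proto = eps * lr_encoder with eps << 1 (hypothetical choice).
    return recon + lam * diversity, recon, diversity

# orthogonal prototypes that reconstruct the tokens exactly: both terms vanish
total, recon, div = decomposed_loss(np.eye(2), np.eye(2), np.eye(2))
print(total, recon, div)
```

Collapsed prototypes (all pointing the same way) make the diversity term strictly positive, so minimizing the combined loss pushes prototypes apart while keeping reconstructions faithful.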
Experiments on four datasets confirm the theoretical predictions: the loss decomposition holds exactly, prototype separation grows as expected when the stability condition is met, and the codebook reaches full utilization, outperforming standard hard vector quantization. An additional study on orbital debris classification shows that the method extends beyond standard NLP and vision tasks to scientific tabular data.