🤖 AI Summary
This work addresses the significant performance degradation commonly observed in compressed deep learning models deployed in resource-constrained settings. To mitigate this issue, the authors propose Alignment Adapter (AlAd), a lightweight sliding-window adapter that restores model performance by aligning the token-level embeddings of the compressed model with those of the original large model. AlAd is agnostic to the underlying compression method and supports flexible alignment across differing architectures and embedding dimensionalities. It can be used either as a plug-and-play module or jointly fine-tuned with the compressed model. Experiments on BERT-family models across three token-level NLP tasks demonstrate that AlAd achieves substantial performance gains with minimal overhead in model size and latency.
📝 Abstract
Compressed Deep Learning (DL) models are essential for deployment in resource-constrained environments, but their performance often lags behind that of their large-scale counterparts. To bridge this gap, we propose Alignment Adapter (AlAd): a lightweight, sliding-window-based adapter that aligns the token-level embeddings of a compressed model with those of the original large model. AlAd preserves local contextual semantics, enables flexible alignment across differing dimensionalities or architectures, and is entirely agnostic to the underlying compression method. AlAd can be deployed in two ways: as a plug-and-play module over a frozen compressed model, or jointly fine-tuned with the compressed model for further performance gains. Through experiments on BERT-family models across three token-level NLP tasks, we demonstrate that AlAd significantly boosts the performance of compressed models with only marginal overhead in size and latency.
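To make the core idea concrete, here is a minimal NumPy sketch of what a sliding-window alignment adapter could look like. All dimensions, the zero-padding scheme, the single linear map, and the MSE objective are assumptions for illustration, not the paper's actual implementation: the adapter maps a flattened window of the compressed model's token embeddings into the large model's embedding space, and a token-level loss measures the mismatch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 8 tokens, compressed dim 4, original dim 6,
# and a window of 1 token on each side (all assumed, not from the paper).
seq_len, d_small, d_large, window = 8, 4, 6, 1

small = rng.standard_normal((seq_len, d_small))  # compressed-model token embeddings
large = rng.standard_normal((seq_len, d_large))  # original-model token embeddings

# Adapter parameters: one linear map from a flattened window of compressed
# embeddings to the large model's embedding space (cross-dimensional by design).
W = rng.standard_normal(((2 * window + 1) * d_small, d_large)) * 0.1

def adapt(small_emb: np.ndarray) -> np.ndarray:
    """Project each token's local context window into the large model's space."""
    padded = np.pad(small_emb, ((window, window), (0, 0)))  # zero-pad sequence ends
    out = np.empty((small_emb.shape[0], W.shape[1]))
    for i in range(small_emb.shape[0]):
        ctx = padded[i : i + 2 * window + 1].reshape(-1)  # flatten the window
        out[i] = ctx @ W
    return out

aligned = adapt(small)
# Token-level alignment objective (MSE is an assumed choice of loss).
loss = float(np.mean((aligned - large) ** 2))
print(aligned.shape, loss > 0)
```

In a real training setup `W` would be learned by minimizing this loss, either with the compressed model frozen (plug-and-play) or with both updated jointly; the windowing is what lets the adapter preserve local contextual semantics rather than aligning each token in isolation.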