🤖 AI Summary
In CTR prediction, excessively large embedding tables often overflow GPU memory and incur high CPU–GPU transfer latency; existing compression methods typically require model-architecture modifications, which limits their generality. This paper proposes MEC, a model-agnostic embedding compression framework. MEC integrates popularity-weighted regularization with contrastive learning: the former promotes balanced allocation of high- and low-frequency features across quantization codebook entries, while the latter enhances inter-center separability and intra-codebook uniformity. Coupled with embedding quantization and a generic compression architecture, MEC reduces the embedding memory footprint by over 50× across three public benchmarks while maintaining or even improving CTR prediction accuracy. MEC thus alleviates both GPU memory bottlenecks and cross-device communication overhead, and as a plug-and-play solution it enables efficient, scalable embedding compression for large-scale industrial recommender systems without architectural constraints.
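A minimal sketch of what the two training-time terms could look like in PyTorch. The function names, the MSE-to-uniform form of the popularity regularizer, and the Wang & Isola-style uniformity term are our assumptions standing in for MEC's exact losses, which the paper defines in full:

```python
import torch
import torch.nn.functional as F

def popularity_weighted_reg(codes, popularity, num_codewords):
    """Hypothetical popularity-weighted regularizer: penalize codewords
    that soak up too much popularity-weighted assignment mass, so high-
    and low-frequency features spread out across the codebook."""
    # codes: (N,) int64 codeword index per feature
    # popularity: (N,) float feature-frequency weights
    mass = torch.zeros(num_codewords).scatter_add_(0, codes, popularity)
    mass = mass / mass.sum()
    uniform = torch.full_like(mass, 1.0 / num_codewords)
    return F.mse_loss(mass, uniform)

def codebook_uniformity(codebook, t=2.0):
    """Standard uniformity loss (Wang & Isola, 2020), used here as a
    stand-in for the paper's contrastive objective: pushes codewords
    apart on the unit hypersphere, improving inter-center separability."""
    z = F.normalize(codebook, dim=-1)   # (K, d) unit-norm codewords
    sq_dist = torch.pdist(z).pow(2)     # condensed pairwise squared distances
    return sq_dist.mul(-t).exp().mean().log()
```

In training, terms of this kind would be added with tuning weights to the usual quantization/CTR objective.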
📝 Abstract
Accurate click-through rate (CTR) prediction is vital for online advertising and recommendation systems. Recent deep learning advances have improved the ability to capture feature interactions and understand user interests, yet optimization of the embedding layer is often overlooked. Embedding tables, which represent categorical and sequential features, can grow excessively large, surpassing GPU memory limits and forcing storage in CPU memory. This results in high memory consumption and increased latency from frequent GPU–CPU data transfers. To tackle these challenges, we introduce a Model-agnostic Embedding Compression (MEC) framework that compresses embedding tables by quantizing pre-trained embeddings without sacrificing recommendation quality. Our approach consists of two stages: first, we apply popularity-weighted regularization to balance code distribution between high- and low-frequency features; then, we integrate a contrastive learning mechanism to ensure a uniform distribution of quantized codes, enhancing the distinctiveness of embeddings. Experiments on three datasets show that our method reduces memory usage by over 50× while maintaining or improving recommendation performance compared to existing models. The implementation is available at https://github.com/USTC-StarTeam/MEC.
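As a back-of-envelope illustration of where a >50× footprint reduction can come from when full-precision rows are replaced by short codes plus shared codebooks. The sizes and the product-quantization-style layout below are our assumptions, not numbers from the paper:

```python
import numpy as np

# Illustrative sizes (assumed): 10M categorical features, dim 64,
# M sub-spaces with K codewords each.
N, d = 10_000_000, 64
M, K = 4, 256

full_bytes     = N * d * 4                    # float32 table ≈ 2.56 GB
codes_bytes    = N * M                        # one uint8 code per sub-space
codebook_bytes = M * K * (d // M) * 4         # small shared codebooks
print(full_bytes / (codes_bytes + codebook_bytes))   # ≈ 64x

def lookup(feature_ids, codes, codebooks):
    """Reconstruct embeddings from compact codes at serving time."""
    # codes: (N, M) uint8; codebooks: (M, K, d // M) float32
    c = codes[feature_ids]                               # (B, M)
    parts = [codebooks[m, c[:, m]] for m in range(M)]    # M x (B, d // M)
    return np.concatenate(parts, axis=1)                 # (B, d)
```

Because only the small codebooks need to stay in GPU memory alongside the uint8 codes, this kind of layout also shrinks the GPU–CPU transfers the abstract identifies as a latency source.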