A Universal Framework for Compressing Embeddings in CTR Prediction

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In CTR prediction, excessively large embedding tables often overflow GPU memory and incur high CPU–GPU transfer latency; existing compression methods typically require model-architecture modifications, limiting their generality. This paper proposes MEC, a model-agnostic embedding compression framework. MEC integrates popularity-weighted regularization with contrastive learning: the former promotes balanced allocation of high- and low-frequency features across quantization codebook entries, while the latter enhances inter-center separability and intra-codebook uniformity. Coupled with embedding quantization and a generic compression architecture, MEC achieves over a 50× reduction in embedding memory footprint across three public benchmarks while maintaining or even improving CTR prediction accuracy. MEC thus significantly alleviates GPU memory bottlenecks and cross-device communication overhead, and as a plug-and-play solution it enables efficient, scalable embedding compression for large-scale industrial recommender systems without architectural constraints.
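
The summary describes two codebook-level objectives. As a rough illustration of the first, the sketch below shows a popularity-weighted quantization loss in PyTorch: each pre-trained embedding is snapped to its nearest codebook center, and the reconstruction error is reweighted by a log-damped inverse frequency so low-frequency features are not crowded out of the codebook. The function name, the weighting scheme, and the single-codebook setup are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def popularity_weighted_quant_loss(embeddings, codebook, counts):
    """Illustrative sketch (not the paper's exact objective): quantize
    pre-trained embeddings against a shared codebook and reweight the
    reconstruction error by feature popularity."""
    # embeddings: (N, d) pre-trained feature embeddings
    # codebook:   (K, d) learnable quantization centers
    # counts:     (N,)   occurrence count of each feature in the logs

    # Assign each embedding to its nearest codebook center.
    dists = torch.cdist(embeddings, codebook)   # (N, K) pairwise distances
    codes = dists.argmin(dim=1)                 # (N,) code index per feature
    quantized = codebook[codes]                 # (N, d) reconstructed embeddings

    # Log-damped inverse-frequency weights: one plausible way to keep
    # high-frequency features from monopolizing the codebook entries.
    weights = 1.0 / torch.log1p(counts.float())
    weights = weights / weights.sum()

    # Popularity-weighted quantization (reconstruction) error.
    per_feature = ((embeddings - quantized) ** 2).sum(dim=1)
    return (weights * per_feature).sum()
```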

📝 Abstract
Accurate click-through rate (CTR) prediction is vital for online advertising and recommendation systems. Recent deep learning advancements have improved the ability to capture feature interactions and understand user interests. However, optimizing the embedding layer often remains overlooked. Embedding tables, which represent categorical and sequential features, can become excessively large, surpassing GPU memory limits and necessitating storage in CPU memory. This results in high memory consumption and increased latency due to frequent GPU-CPU data transfers. To tackle these challenges, we introduce a Model-agnostic Embedding Compression (MEC) framework that compresses embedding tables by quantizing pre-trained embeddings, without sacrificing recommendation quality. Our approach consists of two stages: first, we apply popularity-weighted regularization to balance code distribution between high- and low-frequency features. Then, we integrate a contrastive learning mechanism to ensure a uniform distribution of quantized codes, enhancing the distinctiveness of embeddings. Experiments on three datasets reveal that our method reduces memory usage by over 50x while maintaining or improving recommendation performance compared to existing models. The implementation code is accessible in our project repository https://github.com/USTC-StarTeam/MEC.
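
To make the 50× figure concrete, here is back-of-envelope memory math for a codebook-based scheme under assumed, illustrative numbers (10M features, 64-dim fp32 embeddings, a product-quantization-style layout with 4 subspaces of 256 centers each; the paper's actual configuration may differ):

```python
n_features, dim, fp32 = 10_000_000, 64, 4          # 4 bytes per float32

full_table = n_features * dim * fp32               # dense table: ~2.56 GB

# Codebook storage: each feature keeps one uint8 code per subspace,
# plus small shared codebooks (4 subspaces x 256 centers x 16 dims).
n_subspaces, n_centers = 4, 256
codes = n_features * n_subspaces                   # 1 byte per code
codebooks = n_subspaces * n_centers * (dim // n_subspaces) * fp32

compressed = codes + codebooks
print(f"full table: {full_table / 1e9:.2f} GB")       # 2.56 GB
print(f"compressed: {compressed / 1e6:.1f} MB")       # ~40.1 MB
print(f"ratio:      {full_table / compressed:.0f}x")  # ~64x, i.e. >50x
```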
Problem

Research questions and friction points this paper is trying to address.

Embedding tables grow beyond GPU memory limits in CTR prediction models
Existing compression methods sacrifice recommendation quality or require model changes
Frequent GPU-CPU data transfers add high serving latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantizes pre-trained embeddings without modifying the host model
Employs popularity-weighted regularization to balance code allocation between high- and low-frequency features
Integrates contrastive learning for a uniform, well-separated code distribution (see the sketch after this list)
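
One simple way to realize the contrastive idea in the last bullet is a Wang–Isola-style uniformity regularizer that pushes normalized codebook centers apart on the unit sphere; the sketch below is an assumed illustration, not necessarily MEC's exact loss:

```python
import torch
import torch.nn.functional as F

def codebook_uniformity_loss(codebook, t=2.0):
    """Illustrative contrastive-style regularizer: low when codebook
    centers are spread uniformly over the unit sphere (one way to get
    the inter-center separability described above)."""
    z = F.normalize(codebook, dim=1)            # (K, d) unit-norm centers
    sq_dists = torch.cdist(z, z) ** 2           # (K, K) squared distances
    mask = ~torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    # Wang & Isola (2020) uniformity: log-mean of a Gaussian kernel
    # over all distinct center pairs.
    return torch.log(torch.exp(-t * sq_dists[mask]).mean())
```

In training, such a term would be added to a quantization loss like the one sketched earlier, with a tunable coefficient balancing the two objectives.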
K
Kefan Wang
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, China
H
Hao Wang
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, China
K
Kenan Song
NU, MIT, ASU, UGA
1d textile · 2d coating · 3d printing · materials-manufacturing-mechanics
W
Wei Guo
Huawei Singapore Research Center, Singapore
K
Kai Cheng
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, China
Z
Zhi Li
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Y
Yong Liu
Huawei Singapore Research Center, Singapore
D
Defu Lian
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China, Hefei, China
E
Enhong Chen
University of Science and Technology of China
data mining · recommender system · machine learning