🤖 AI Summary
To address the efficiency–accuracy trade-off in click-through rate (CTR) prediction for large language model (LLM)-based recommender systems, this paper proposes a lightweight framework that integrates retrieval-augmented generation (RAG) with a multi-head early-exit mechanism. The confidence-driven multi-head architecture terminates inference dynamically based on each head's output confidence. In addition, a lightweight graph convolutional network (GCN) serves as the retriever within the RAG pipeline, enabling joint optimization of retrieval and generation. Extensive experiments on mainstream CTR benchmark datasets show that the method reduces average inference latency by 42.3% while consistently improving AUC by 0.12–0.31 percentage points, satisfying the industrial requirements for real-time recommendation: low-latency inference with high predictive accuracy.
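The paper does not include implementation details here, but the confidence-driven early-exit idea can be sketched as follows. This is a minimal illustration, not the authors' code: the head design, the confidence measure (distance of the predicted click probability from the undecided 0.5 point), and all names (`EarlyExitHead`, `predict_with_early_exit`, `threshold`) are assumptions for exposition.

```python
import torch
import torch.nn as nn

class EarlyExitHead(nn.Module):
    """A small CTR head attached to one intermediate transformer layer (illustrative)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Predicted click probability for each example in the batch.
        return torch.sigmoid(self.classifier(hidden))

def predict_with_early_exit(hidden_states, heads, threshold=0.9):
    """Score each layer's hidden state with its head; stop as soon as every
    example is confidently classified.

    Confidence is taken as max(p, 1 - p): how far the predicted click
    probability lies from the undecided 0.5 point. The actual paper may use
    a different certainty measure.
    """
    for depth, (hidden, head) in enumerate(zip(hidden_states, heads)):
        p = head(hidden)
        confidence = torch.max(p, 1 - p)
        if confidence.min().item() >= threshold:
            return p, depth  # early exit: skip the remaining layers
    return p, len(heads) - 1  # fell through: use the deepest head
```

With a high threshold most batches run the full depth, while "easy" batches exit after a shallow layer, which is where the reported latency savings would come from.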
📝 Abstract
The deployment of Large Language Models (LLMs) in recommender systems for Click-Through Rate (CTR) prediction requires a delicate balance between computational efficiency and predictive accuracy. This paper presents an optimization framework that combines Retrieval-Augmented Generation (RAG) with a multi-head early-exit architecture to enhance both concurrently. By integrating Graph Convolutional Networks (GCNs) as efficient retrieval mechanisms, we significantly reduce retrieval time while maintaining high model performance. The early-exit strategy terminates model inference dynamically, using real-time confidence assessments across multiple prediction heads; this accelerates LLM inference while preserving or even improving accuracy, making the approach well suited to real-time scenarios. Our experiments demonstrate that the architecture reduces computation time without sacrificing the accuracy needed for reliable recommendation delivery, establishing a new standard for efficient, real-time LLM deployment in commercial systems.