Leveraging Large Language Models (LLMs) to Empower Training-Free Dataset Condensation for Content-Based Recommendation

📅 2023-10-15
🏛️ arXiv.org
📈 Citations: 11
Influential: 0
🤖 AI Summary
To address the high training cost and poor scalability of content-based recommendation on large text datasets, this paper proposes a training-free, LLM-driven, dual-level dataset condensation method. At the content level, it preserves critical semantic information by having an LLM rewrite each item's full content into a single informative title; at the user level, it synthesizes representative users and interactions through interest-aware clustering. Unlike conventional condensation paradigms that rely on gradient-based, iterative optimization over the synthesized data, the approach is forward-only, with no fine-tuning and no optimization loop over the condensed dataset. Evaluated on three real-world datasets, including MIND, the method retains up to 97% of full-dataset recommendation performance using only 5% of the original data, i.e., a 95% reduction in dataset size. This establishes a scalable, low-overhead condensation paradigm for content-based recommendation in resource-constrained settings.
📝 Abstract
Modern techniques in Content-based Recommendation (CBR) leverage item content information to provide personalized services to users, but suffer from resource-intensive training on large datasets. To address this issue, we explore dataset condensation for textual CBR in this paper. The goal of dataset condensation is to synthesize a small yet informative dataset, upon which models can achieve performance comparable to those trained on large datasets. While existing condensation approaches are tailored to classification tasks on continuous data such as images or embeddings, applying them directly to CBR has limitations. To bridge this gap, we investigate efficient dataset condensation for content-based recommendation. Inspired by the remarkable abilities of large language models (LLMs) in text comprehension and generation, we leverage LLMs to empower the generation of textual content during condensation. To handle interaction data involving both users and items, we devise a dual-level condensation method: content-level and user-level. At the content level, we utilize LLMs to condense all the contents of an item into a new informative title. At the user level, we design a clustering-based synthesis module: we first utilize LLMs to extract user interests, then incorporate the user interests and user embeddings to condense users and generate interactions for the condensed users. Notably, the condensation paradigm of this method is forward-only and free from iterative optimization on the synthesized dataset. Extensive empirical results on three real-world datasets substantiate the efficacy of the proposed method. In particular, we approximate up to 97% of the original performance while reducing the dataset size by 95% (i.e., on the MIND dataset).
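The content-level step described above amounts to prompting an LLM to rewrite all of an item's textual fields into one informative title. Below is a minimal sketch of that idea, assuming an OpenAI-style chat-completion client; the prompt wording, field names, and model choice are illustrative assumptions, not the paper's actual setup.

```python
# Sketch of content-level condensation: an LLM condenses all of an item's
# textual fields into a single informative title. Prompt wording and the
# OpenAI chat API are assumptions for illustration, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def condense_item(title: str, abstract: str, body: str,
                  model: str = "gpt-4o-mini") -> str:
    """Rewrite an item's full content as one semantics-preserving title."""
    prompt = (
        "Condense the following news item into a single informative title "
        "that preserves its key semantics.\n\n"
        f"Title: {title}\nAbstract: {abstract}\nBody: {body}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic output for reproducible condensation
    )
    return resp.choices[0].message.content.strip()
```

Because this step is a single forward pass per item, the condensed corpus can be built once and reused; no gradients flow through the LLM.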
Problem

Research questions and friction points this paper is trying to address.

Efficient dataset condensation for content-based recommendation
Generating informative condensed datasets using large language models
Reducing dataset size while maintaining high recommendation performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses LLMs to condense each item's textual content into a single informative title
Devises a dual-level (content- and user-level) condensation method; the user-level clustering step is sketched after this list
Approximates up to 97% of full-data performance with only 5% of the original data
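The user-level module clusters users by combining their collaborative embeddings with LLM-extracted interests, collapses each cluster into one condensed user, and generates that user's interactions from the items its cluster members engaged with. The sketch below illustrates this with scikit-learn k-means; the feature fusion (simple concatenation), sampling scheme, and all variable names are assumptions, not the paper's exact formulation.

```python
# Sketch of user-level, clustering-based synthesis: fuse user embeddings with
# LLM-interest embeddings, cluster, and synthesize one condensed user (with a
# sampled interaction history) per cluster. Details here are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def condense_users(user_emb: np.ndarray,          # (n_users, d) user embeddings
                   interest_emb: np.ndarray,      # (n_users, d) LLM-interest embeddings
                   interactions: list[set[int]],  # item ids clicked by each user
                   n_condensed: int,
                   items_per_user: int = 20,
                   seed: int = 0) -> list[np.ndarray]:
    rng = np.random.default_rng(seed)
    # Fuse collaborative and semantic (interest) signals before clustering.
    features = np.concatenate([user_emb, interest_emb], axis=1)
    labels = KMeans(n_clusters=n_condensed, random_state=seed,
                    n_init=10).fit_predict(features)

    condensed = []
    for c in range(n_condensed):
        members = np.flatnonzero(labels == c)
        # Pool the cluster members' clicked items, then sample a compact history.
        pool = np.array(sorted(set().union(*(interactions[u] for u in members))))
        k = min(items_per_user, len(pool))
        condensed.append(rng.choice(pool, size=k, replace=False))
    return condensed  # one synthetic interaction list per condensed user
```

Note the forward-only character: clustering and sampling run once, with no iterative optimization over the synthesized dataset.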
Jiahao Wu
The Chinese University of Hong Kong
Medical Robots · Robot-assisted Microsurgery · Motion Planning
Qijiong Liu
The Hong Kong Polytechnic University, Hong Kong, China
Hengchang Hu
National University of Singapore
Recommender System · Graph Neural Network
Wenqi Fan
The Hong Kong Polytechnic University, Hong Kong, China
Shengcai Liu
Southern University of Science and Technology
Learn to Optimize · LLM+Optimization
Qing Li
The Hong Kong Polytechnic University, Hong Kong, China
Xiao-Ming Wu
The Hong Kong Polytechnic University, Hong Kong, China
Ke Tang
Southern University of Science and Technology, Shenzhen, Guangdong, China