TagLLM: A Fine-Grained Tag Generation Approach for Note Recommendation

πŸ“… 2026-03-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing tag generation methods in note recommendation systems often suffer from tag redundancy, insufficient user interest guidance, and inadequate fine-grained expression, which collectively limit recommendation performance. To address these limitations, this work proposes TagLLM, a novel approach that leverages a User Interest Handbook to guide a multimodal chain-of-thought (CoT) mechanism, enabling fine-grained and interpretable tag generation. The authors further introduce a Tag Knowledge Distillation strategy that improves both the generation quality and the inference efficiency of smaller models. Online A/B experiments show that the proposed method improves user experience, yielding a 0.31% increase in average view duration per user, a 0.96% rise in interactions per user, and a 32.37% boost in page view click-through rate in the cold-start scenario.

πŸ“ Abstract
Large Language Models (LLMs) have shown promising potential in E-commerce community recommendation. While LLMs and Multimodal LLMs (MLLMs) are widely used to encode notes into implicit embeddings, leveraging their generative capabilities to represent notes with interpretable tags remains unexplored. In the field of tag generation, traditional closed-ended methods rely heavily on the design of tag pools, while existing open-ended methods applied directly to note recommendation face two limitations: (1) MLLMs lack guidance during generation, producing redundant tags that fail to capture user interests; (2) the generated tags are often coarse and lack fine-grained representation of notes, interfering with downstream recommendation. To address these limitations, we propose TagLLM, a fine-grained tag generation method for note recommendation. TagLLM captures user interests across note categories through a User Interest Handbook and constructs fine-grained tag data using multimodal CoT Extraction. A Tag Knowledge Distillation method is developed to equip small models with competitive generation capabilities, enhancing inference efficiency. In an online A/B test, TagLLM increases average view duration per user by 0.31%, average interactions per user by 0.96%, and page view click-through rate in the cold-start scenario by 32.37%, demonstrating its effectiveness.
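The abstract describes a three-stage pipeline: aggregate user interests into a handbook, use it to guide a multimodal CoT prompt that elicits fine-grained tags from a large model, then pair prompts with teacher tags as distillation data for a small model. The sketch below illustrates that flow; all function names, the prompt wording, and the data shapes are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch of the TagLLM pipeline (assumed structure, not the
# paper's code): handbook -> multimodal CoT prompt -> distillation pair.

def build_interest_handbook(interactions):
    """Aggregate (category, keyword) interaction records into per-category
    interest keywords -- a stand-in for the User Interest Handbook."""
    handbook = {}
    for category, keyword in interactions:
        handbook.setdefault(category, set()).add(keyword)
    return handbook

def cot_tag_prompt(note_text, note_image_caption, handbook):
    """Compose a multimodal chain-of-thought prompt that guides an MLLM
    toward fine-grained, interest-aligned, de-duplicated tags."""
    interests = "; ".join(
        f"{cat}: {', '.join(sorted(kws))}" for cat, kws in sorted(handbook.items())
    )
    return (
        "User interests (handbook): " + interests + "\n"
        "Note text: " + note_text + "\n"
        "Note image: " + note_image_caption + "\n"
        "Step 1: identify the note's category.\n"
        "Step 2: extract fine-grained attributes (brand, style, scenario).\n"
        "Step 3: output de-duplicated tags matching the user's interests."
    )

def make_distillation_pair(prompt, teacher_tags):
    """Pair the prompt with the large model's tags so a small model can be
    fine-tuned to reproduce them (Tag Knowledge Distillation)."""
    return {"input": prompt, "target": teacher_tags}

# Toy usage with invented data:
handbook = build_interest_handbook([("sneakers", "retro running"),
                                    ("sneakers", "colorway")])
prompt = cot_tag_prompt("New AJ1 colorway unboxing",
                        "photo of red/black sneakers", handbook)
pair = make_distillation_pair(prompt,
                              ["sneakers", "AJ1", "red/black colorway"])
```

The design choice worth noting is that the handbook is injected into the prompt rather than learned: guidance at generation time is what steers the MLLM away from redundant, coarse tags.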
Problem

Research questions and friction points this paper is trying to address.

tag generation
note recommendation
fine-grained representation
user interest
multimodal LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

fine-grained tag generation
User Interest Handbook
multimodal CoT extraction
Tag Knowledge Distillation
note recommendation
πŸ”Ž Similar Papers
No similar papers found.
Zhijian Chen
Department of Computer Science and Technology, Tongji University, Shanghai, China
Likai Wang
Chang’an University
Lei Chen
Shanghai Dewu Information Group Co. Ltd., Shanghai, China
Yaguang Dou
Shanghai Dewu Information Group Co. Ltd., Shanghai, China
Jialiang Shi
Shanghai Dewu Information Group Co. Ltd., Shanghai, China
Tian Qi
Shanghai Dewu Information Group Co. Ltd., Shanghai, China
Dongdong Hao
Shanghai Dewu Information Group Co. Ltd., Shanghai, China
Mengying Lu
Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Cheng Ye
Shanghai Dewu Information Group Co. Ltd., Shanghai, China
Chao Wei
Qualcomm, Nokia, Nokia Siemens Networks