LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation

๐Ÿ“… 2025-07-14
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing methods rely on heuristic assumptions, while LLMs face two key challenges in multi-interest modeling: uncontrollable interest granularity and user behavior sparsity. To address these, this paper proposes a dual-level multi-interest modeling framework: at the individual level, it employs LLM-driven semantic clustering for adaptive interest segmentation; at the population level, it constructs behaviorally enriched synthetic users to alleviate data sparsity. The framework innovatively introduces (1) a semantic-collaborative interest alignment module, (2) a maximum-coverage optimization objective, and (3) a contrastive learningโ€“driven representation disentanglement mechanism. Extensive experiments on multiple real-world datasets demonstrate significant improvements over state-of-the-art methods, validating both superior recommendation performance and enhanced interpretability of learned user interests.
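The semantic-collaborative alignment can be illustrated with a minimal sketch: each LLM-derived semantic cluster embedding is assigned to its nearest collaborative interest prototype, so clusters that map to the same prototype merge and the effective granularity follows the collaborative signal. The function name, cosine-similarity assignment rule, and toy embeddings below are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

def align_semantic_clusters(cluster_embs, interest_protos):
    """Assign each LLM-derived semantic cluster to its closest
    collaborative interest prototype by cosine similarity.
    Clusters mapped to the same prototype are merged, so interest
    granularity adapts to the collaborative side."""
    # Row-normalise so the dot product equals cosine similarity.
    c = cluster_embs / np.linalg.norm(cluster_embs, axis=1, keepdims=True)
    p = interest_protos / np.linalg.norm(interest_protos, axis=1, keepdims=True)
    assignment = (c @ p.T).argmax(axis=1)  # cluster index -> prototype index
    merged = {}
    for ci, pi in enumerate(assignment):
        merged.setdefault(int(pi), []).append(ci)
    return merged

# Toy example: three semantic clusters, two collaborative prototypes.
clusters = np.array([[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]])
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
print(align_semantic_clusters(clusters, protos))  # {0: [0, 1], 1: [2]}
```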

๐Ÿ“ Abstract
Recently, much effort has been devoted to modeling users' multi-interests based on their behaviors or auxiliary signals. However, existing methods often rely on heuristic assumptions, e.g., that co-occurring items indicate the same user interest, and thus fail to capture multi-interests that align with real-world scenarios. While large language models (LLMs) show significant potential for multi-interest analysis thanks to their extensive knowledge and powerful reasoning capabilities, two key challenges remain. First, the granularity of LLM-driven multi-interests is agnostic, possibly leading to overly fine or overly coarse interest groupings. Second, analyzing individual users provides limited insight due to data sparsity. In this paper, we propose an LLM-driven dual-level multi-interest modeling framework for more effective recommendation. At the user-individual level, we exploit LLMs to flexibly allocate the items a user has engaged with into different semantic clusters, indicating their diverse and distinct interests. To mitigate the agnostic generation of LLMs, we adaptively assign these semantic clusters to users' collaborative multi-interests learned from global user-item interactions, allowing the granularity to be automatically adjusted to each user's behaviors via an alignment module. To compensate for the limited insight derivable from an individual user's behaviors, at the user-crowd level we aggregate user cliques into synthesized users with rich behaviors for more comprehensive LLM-driven multi-interest analysis. We formulate a maximum-coverage problem to ensure the compactness and representativeness of synthesized users' behaviors, and then conduct contrastive learning on their LLM-driven multi-interests to disentangle item representations across different interests. Experiments on real-world datasets show the superiority of our approach over state-of-the-art methods.
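The maximum-coverage selection of clique members whose histories form a synthesized user can be approximated with the standard greedy algorithm, which carries the classic (1 - 1/e) guarantee. The sketch below uses illustrative identifiers and toy histories, not the paper's actual solver or data.

```python
def greedy_max_coverage(user_items, k):
    """Greedily pick up to k users whose combined histories cover
    the most distinct items (greedy max-coverage approximation)."""
    covered, chosen = set(), []
    candidates = dict(user_items)
    for _ in range(k):
        # Pick the user contributing the most not-yet-covered items.
        best = max(candidates, key=lambda u: len(candidates[u] - covered), default=None)
        if best is None or not (candidates[best] - covered):
            break  # no remaining candidate adds new items
        chosen.append(best)
        covered |= candidates.pop(best)
    return chosen, covered

histories = {
    "u1": {"i1", "i2", "i3"},
    "u2": {"i3", "i4"},
    "u3": {"i4", "i5", "i6"},
}
chosen, covered = greedy_max_coverage(histories, k=2)
print(chosen, sorted(covered))  # two users suffice to cover all six items
```

Picking `u1` then `u3` covers every item, whereas `u2` would add only one new item after `u1`, which is exactly the redundancy the coverage objective penalizes.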
Problem

Research questions and friction points this paper is trying to address.

Modeling user multi-interests without heuristic assumptions
Adjusting LLM-driven interest granularity adaptively
Overcoming data sparsity via crowd-level interest analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven semantic clustering for user interests
Dual-level adaptive granularity interest alignment
Synthesized user clique behavior aggregation
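The contrastive disentanglement step described above can be sketched with a plain InfoNCE loss: an item is pulled toward another item from the same LLM-derived interest and pushed away from items belonging to the user's other interests. The cosine-similarity InfoNCE below is a generic stand-in under that assumption, not the paper's exact objective, and the random embeddings are placeholders.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one (anchor, positive) item pair against
    negative items drawn from the user's other interests."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    # Lower loss = anchor closer to its same-interest positive
    # than to items from other interests.
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
anchor, positive = rng.normal(size=8), rng.normal(size=8)
negatives = [rng.normal(size=8) for _ in range(4)]
print(info_nce(anchor, positive, negatives))
```

Minimizing this loss across interests pushes item representations of different interests apart, which is the disentanglement effect the bullets refer to.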
๐Ÿ”Ž Similar Papers
No similar papers found.
Authors
Ziyan Wang, Nanyang Technological University, Singapore
Yingpeng Du, Nanyang Technological University (Recommender systems · Ensemble learning · LLMs)
Zhu Sun, Singapore University of Technology and Design, Singapore
Jieyi Bi, PhD candidate @ NTU; M.Eng. & B.Sc. @ SYSU (Deep Learning · Learning to Optimize)
Haoyan Chua, Nanyang Technological University, Singapore
Tianjun Wei, Nanyang Technological University (User Modeling · Large Language Models · Recommender Systems)
Jie Zhang, Nanyang Technological University, Singapore