🤖 AI Summary
This work addresses the pervasive performance imbalance between head and tail items in large language model-based recommender systems (LRSs) under long-tailed data distributions. It presents the first systematic analysis of the dual effects of prior-induced and data-driven long-tailedness in LRSs and introduces Efficient Item-wise Sharpness-Aware Minimization (EISAM), the first optimization framework tailored to mitigating long-tail challenges in such systems. EISAM incorporates a computationally efficient sharpness penalty via item-wise adaptive regularization of the loss landscape, yielding a theoretically grounded generalization error bound with a faster convergence rate. Extensive experiments on three real-world datasets demonstrate that EISAM significantly improves recommendation performance for tail items while maintaining overall accuracy, validating both its effectiveness and its scalability.
📄 Abstract
Large Language Model-based Recommender Systems (LRSs) have recently emerged as a new paradigm in sequential recommendation by directly adopting LLMs as backbones. While LRSs demonstrate strong knowledge utilization and instruction-following abilities, they have not been systematically studied under the long-standing long-tail problem. In this paper, we conduct an empirical study and reveal that LRSs face two distinct types of long-tailedness: i) the prior long-tail, inherited implicitly from pretraining corpora, and ii) the data long-tail, originating from skewed recommendation datasets. Our analysis shows that both contribute to the performance disparity between head and tail items, and that items lying in the intersection of the two head sets exhibit an even stronger head effect. Nevertheless, the overall performance distribution in LRSs, especially on the tail, remains dominated by the data long-tail. To address this challenge, we propose Efficient Item-wise Sharpness-Aware Minimization (EISAM), a novel optimization framework that improves tail-item performance by adaptively regularizing the loss landscape at the item level. EISAM introduces an efficient penalty design that captures fine-grained item-specific sharpness while maintaining computational scalability for LLMs. In addition, we derive a generalization bound for EISAM. Our theoretical analysis shows that the bound decreases at a faster rate under our item-wise regularization, offering theoretical support for its effectiveness. Extensive experiments on three real-world datasets demonstrate that EISAM significantly boosts tail-item recommendation performance while preserving overall quality, establishing the first systematic solution to the long-tail problem in LRSs.
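To make the core idea concrete, here is a minimal sketch of an item-wise SAM-style update. This is not the paper's exact EISAM formulation: the inverse-square-root schedule for the perturbation radius, the toy quadratic loss, and the function names (`item_rho`, `sam_step`) are illustrative assumptions. Standard SAM approximately solves min_w max_{||ε||≤ρ} L(w+ε) by perturbing w along the normalized gradient; the item-wise variant scales ρ per item so rarer (tail) items receive a stronger sharpness penalty.

```python
import numpy as np

def item_rho(freq, base_rho=0.05):
    """Perturbation radius per item: larger for low-frequency (tail)
    items. The inverse-sqrt schedule here is an assumed stand-in for
    EISAM's actual item-wise penalty design."""
    return base_rho / np.sqrt(freq)

def sam_step(w, grad_fn, freq, lr=0.1, base_rho=0.05):
    """One SAM-style update on parameters w for a single item.

    1) ascend along the normalized gradient by the item's radius rho
    2) take the descent step using the gradient at the perturbed point
    """
    g = grad_fn(w)
    rho = item_rho(freq, base_rho)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at the sharp point
    return w - lr * g_sharp

# Toy quadratic loss L(w) = 0.5 * ||w - target||^2, so grad L = w - target.
target = np.array([1.0, -2.0])
grad = lambda w: w - target

w = np.zeros(2)
w_tail = sam_step(w, grad, freq=10)    # tail item: larger rho, flatter minimum sought
w_head = sam_step(w, grad, freq=1000)  # head item: smaller rho, near-vanilla SGD step
```

Both updates move toward the minimizer, but the tail item is optimized against a larger neighborhood of the loss surface, which is the mechanism the abstract attributes the improved tail-item generalization to.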