Fusion and Alignment Enhancement with Large Language Models for Tail-item Sequential Recommendation

📅 2026-04-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the underrepresentation of tail items in sequential recommendation, which stems from sparse interactions and from the structural misalignment between the collaborative (ID) embedding space and the semantic embedding space derived from LLMs. To tackle these challenges, the authors propose FAERec, a framework that dynamically fuses ID-based embeddings with semantic embeddings generated by large language models through an adaptive gating mechanism. FAERec further introduces a dual-level alignment strategy, comprising item-level contrastive learning and feature-level correlation constraints, to enhance the representation quality of tail items, and integrates a curriculum learning scheduler to coordinate the two alignment objectives. Experiments on three mainstream datasets demonstrate that FAERec significantly improves recommendation accuracy for tail items and generalizes well across various backbone models.
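The adaptive gating fusion described above can be sketched as follows. This is a minimal, hypothetical implementation, not the paper's exact formulation: the gate is assumed here to be a single linear layer with a sigmoid over the concatenated ID and LLM embeddings, producing a per-dimension mixing weight.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Adaptive gate that mixes ID and LLM item embeddings.

    Hypothetical sketch: the gate parameterization (a single linear layer
    over the concatenated embeddings) is an assumption, not taken from the paper.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, e_id: torch.Tensor, e_llm: torch.Tensor) -> torch.Tensor:
        # Per-dimension gate g in (0, 1), conditioned on both embeddings.
        g = torch.sigmoid(self.gate(torch.cat([e_id, e_llm], dim=-1)))
        # Convex combination: g leans on the collaborative (ID) signal,
        # (1 - g) on the LLM-derived semantic signal.
        return g * e_id + (1.0 - g) * e_llm
```

For a tail item with few interactions, such a gate can learn to shift weight toward the semantic embedding, which is the intuition behind fusing the two sources adaptively rather than with a fixed ratio.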
📝 Abstract
Sequential Recommendation (SR) learns user preferences from historical interaction sequences and provides personalized suggestions. In real-world scenarios, most items exhibit sparse interactions, known as the tail-item problem. This issue limits the model's ability to accurately capture item transition patterns. Large language models (LLMs) offer a promising solution by capturing semantic relationships between items. Although previous efforts leverage LLM-derived embeddings to enrich tail items, they still face the following limitations: 1) they struggle to effectively fuse collaborative signals with semantic knowledge, leading to suboptimal item embedding quality; 2) they overlook the structural inconsistency between the ID and LLM embedding spaces, causing conflicting signals that degrade recommendation accuracy. In this work, we propose a Fusion and Alignment Enhancement framework with LLMs for Tail-item Sequential Recommendation (FAERec), which improves item representations by generating coherently fused and structurally consistent embeddings. For the information-fusion challenge, we design an adaptive gating mechanism that dynamically fuses ID and LLM embeddings. We then propose a dual-level alignment approach to mitigate structural inconsistency: item-level alignment establishes correspondences between the ID and LLM embeddings of the same item through contrastive learning, while feature-level alignment constrains the correlation patterns between corresponding dimensions across the two embedding spaces. Furthermore, the weights of the two alignments are adjusted by a curriculum learning scheduler to avoid premature optimization of the more complex feature-level objective. Extensive experiments across three widely used datasets with multiple representative SR backbones demonstrate the effectiveness and generalizability of our framework.
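The dual-level alignment and the curriculum weighting described in the abstract can be sketched as follows. These are plausible instantiations under assumed details, not the paper's exact losses: the item-level term is written as an in-batch InfoNCE loss (positives are the ID/LLM embeddings of the same item), the feature-level term as a Barlow Twins-style cross-correlation constraint between corresponding dimensions, and the scheduler as a linear warm-up. The temperature, off-diagonal weight, and warm-up length are all assumptions.

```python
import torch
import torch.nn.functional as F

def item_level_alignment(e_id: torch.Tensor, e_llm: torch.Tensor,
                         temperature: float = 0.2) -> torch.Tensor:
    """In-batch InfoNCE: the ID and LLM embeddings of the same item are a
    positive pair; other items in the batch serve as negatives.
    The temperature value is an assumption."""
    z_id = F.normalize(e_id, dim=-1)
    z_llm = F.normalize(e_llm, dim=-1)
    logits = z_id @ z_llm.t() / temperature              # (B, B) cosine similarities
    targets = torch.arange(z_id.size(0), device=z_id.device)
    return F.cross_entropy(logits, targets)

def feature_level_alignment(e_id: torch.Tensor, e_llm: torch.Tensor,
                            off_diag_weight: float = 5e-3) -> torch.Tensor:
    """Pushes the cross-correlation between corresponding dimensions of the
    two embedding spaces toward the identity (a Barlow Twins-style sketch)."""
    b = e_id.size(0)
    z_id = (e_id - e_id.mean(0)) / (e_id.std(0) + 1e-6)   # standardize per dimension
    z_llm = (e_llm - e_llm.mean(0)) / (e_llm.std(0) + 1e-6)
    c = z_id.t() @ z_llm / b                              # (D, D) correlation matrix
    on_diag = (torch.diagonal(c) - 1.0).pow(2).sum()      # matching dims -> correlated
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + off_diag_weight * off_diag

def curriculum_weight(step: int, warmup_steps: int = 1000) -> float:
    """Linearly ramps the feature-level weight so the harder objective is not
    optimized prematurely; the actual schedule in the paper is not given here."""
    return min(1.0, step / warmup_steps)
```

Under this sketch, the alignment loss at training step `t` would combine as `item_level_alignment(...) + curriculum_weight(t) * feature_level_alignment(...)`, deferring the feature-level objective until the item-level correspondences have begun to form.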
Problem

Research questions and friction points this paper is trying to address.

Tail-item problem
Sequential Recommendation
Embedding fusion
Structural inconsistency
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tail-item Recommendation
Large Language Models
Embedding Fusion
Dual-level Alignment
Sequential Recommendation