🤖 AI Summary
This work addresses the limitation of conventional sequential recommendation systems that represent items with a single embedding, which fails to capture their multifaceted nature and users' complex preferences across multiple dimensions. To this end, the authors propose a multi-faceted multi-head mixture-of-experts architecture: within a multi-head attention framework, each head employs sub-embeddings to model distinct semantic aspects of items, and a gating mechanism dynamically fuses the predictions from all heads. Inside each head, a mixture-of-experts network with learnable routing disentangles diverse user preferences within that semantic dimension. Furthermore, the model integrates a text-enhanced pre-trained encoder with supervised contrastive learning to enrich the semantic quality of embeddings. Experiments demonstrate that the proposed approach significantly improves recommendation performance, yielding item representations that are both semantically richer and structurally more coherent, effectively capturing users' dynamic multi-dimensional preferences.
📝 Abstract
Sequential recommendation (SR) systems excel at capturing users' dynamic preferences by leveraging their interaction histories. Most existing SR systems assign a single embedding vector to each item to represent its features, adopting various models to combine these embeddings into a sequence representation that captures user intent. However, we argue that this representation alone is insufficient to capture an item's multi-faceted nature (e.g., movie genres, starring actors). Furthermore, users often exhibit complex and varied preferences within these facets (e.g., liking both action and musical films within the genre facet), which are challenging to fully represent with static identifiers. To address these issues, we propose a novel architecture titled Facet-Aware Multi-Head Mixture-of-Experts Model for Sequential Recommendation (FAME). We leverage sub-embeddings from each head in the final multi-head attention layer to predict the next item separately, effectively capturing distinct item facets. A gating mechanism then integrates these predictions by dynamically determining their importance. Additionally, we introduce a Mixture-of-Experts (MoE) network within each attention head to disentangle varied user preferences within each facet, utilizing a learnable router network to aggregate expert outputs based on context. Complementing this architecture, we design a Text-Enhanced Facet-Aware Pre-training module to overcome the limitations of randomly initialized embeddings. By utilizing a pre-trained text encoder and employing an alternating supervised contrastive learning objective, we explicitly disentangle facet-specific features from textual metadata (e.g., descriptions) before sequential training begins. This ensures that the item embeddings are semantically robust and aligned with the downstream multi-facet framework.
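The head-wise prediction scheme described above can be sketched in PyTorch: each head refines its facet sub-representation through a small MoE with a learnable softmax router, scores the next item against that facet's slice of the item embeddings, and a gate fuses the per-head score distributions. This is a minimal illustration, not the paper's implementation; all module names, dimensions, expert counts, and the single-vector sequence representation are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FacetHeadMoE(nn.Module):
    """Refine one head's facet representation with a Mixture-of-Experts.

    A learnable router produces softmax weights over experts from the
    head's context vector (a simplified, assumed routing scheme).
    """
    def __init__(self, head_dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(head_dim, head_dim), nn.GELU(),
                          nn.Linear(head_dim, head_dim))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(head_dim, num_experts)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (B, head_dim)
        weights = F.softmax(self.router(h), dim=-1)               # (B, E)
        outs = torch.stack([e(h) for e in self.experts], dim=1)   # (B, E, d)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)          # (B, d)


class GatedMultiFacetPredictor(nn.Module):
    """Each head predicts the next item via facet sub-embeddings;
    a gate dynamically weights and fuses the per-head predictions."""
    def __init__(self, num_items: int, dim: int = 64, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        # Item embeddings are split head-wise into facet sub-embeddings.
        self.item_emb = nn.Embedding(num_items, dim)
        self.head_moe = nn.ModuleList(
            FacetHeadMoE(self.head_dim) for _ in range(num_heads))
        self.gate = nn.Linear(dim, num_heads)

    def forward(self, seq_repr: torch.Tensor) -> torch.Tensor:  # (B, dim)
        B = seq_repr.size(0)
        heads = seq_repr.view(B, self.num_heads, self.head_dim)
        sub_items = self.item_emb.weight.view(-1, self.num_heads, self.head_dim)
        scores = []
        for k in range(self.num_heads):
            hk = self.head_moe[k](heads[:, k])        # (B, head_dim)
            scores.append(hk @ sub_items[:, k].T)     # (B, num_items)
        scores = torch.stack(scores, dim=1)           # (B, H, num_items)
        gate = F.softmax(self.gate(seq_repr), dim=-1) # (B, H) head importance
        return (gate.unsqueeze(-1) * scores).sum(dim=1)  # (B, num_items)
```

Keeping the heads' score distributions separate until the final gated sum is what lets each facet vote independently before the model decides, per user and per context, which facet matters most.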
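A supervised contrastive objective of the kind used in the pre-training module can be sketched as follows: item embeddings sharing a facet label (e.g., the same genre) are pulled together while others are pushed apart. This follows the standard SupCon formulation and is only a hypothetical stand-in for the paper's alternating facet-wise objective; the function name and temperature are assumptions.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(z: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """SupCon-style loss: for each anchor, positives are all other samples
    with the same facet label; the loss is the mean negative log-probability
    of the positives under a temperature-scaled softmax over similarities."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.T / temperature                       # (N, N) cosine sims
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Exclude self-similarity, then take log-softmax over remaining samples.
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-probability over each anchor's positives (skip anchors
    # with no positive to avoid division by zero).
    pos_log = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss = -pos_log.sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()
```

An "alternating" scheme as described in the abstract would presumably apply such a loss per facet in turn (one label set per facet) so that each facet's sub-embedding space is disentangled separately before sequential training.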