Hi-SAM: A Hierarchical Structure-Aware Multi-modal Framework for Large-Scale Recommendation

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two limitations of multi-modal recommendation systems: semantic tokens often suffer from redundancy or collapse, and flat architectures neglect the hierarchical structure among user interactions, items, and tokens, biasing attention toward local details and impairing holistic semantic modeling. To this end, the authors propose the Hi-SAM framework. A Disentangled Semantic Tokenizer (DST) separates cross-modal shared semantics from modality-specific details through geometry-aware alignment, coarse-to-fine quantization, and mutual information minimization. A Hierarchical Memory-Anchor Transformer (HMAT) then explicitly models the tripartite hierarchy with hierarchical RoPE positional encoding and anchor tokens that compress historical interactions into compact memories. Extensive experiments demonstrate that Hi-SAM outperforms state-of-the-art methods across multiple real-world datasets, with particularly notable gains in cold-start scenarios. The framework has been deployed on a platform serving tens of millions of users, yielding a 6.55% improvement in the core online metric.
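The coarse-to-fine quantization the summary mentions builds on residual quantization (the RQ-VAE family): each level quantizes the residual left by the previous level, so early levels capture coarse, shared structure and later levels recover finer details. A minimal sketch of that generic mechanism, not the paper's learned tokenizer (codebook sizes and contents here are arbitrary placeholders):

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Generic coarse-to-fine residual quantization: each codebook
    quantizes the residual left by the previous level. In Hi-SAM's
    framing, an early level would play the role of a shared
    (cross-modal) codebook and deeper levels the modality-specific
    ones; the codebooks here are random stand-ins, not learned."""
    ids, recon = [], np.zeros_like(x)
    for cb in codebooks:
        residual = x - recon
        # pick the codeword nearest to the current residual
        k = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        ids.append(k)
        recon = recon + cb[k]
    return ids, recon

rng = np.random.default_rng(0)
x = rng.normal(size=8)                                  # an item embedding
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]  # 3 levels, 16 codewords each
ids, recon = residual_quantize(x, codebooks)            # item -> 3 semantic IDs
```

The list `ids` is the item's compact semantic-ID tuple; `recon` is the sum of the selected codewords at each level.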

📝 Abstract
Multi-modal recommendation has gained traction as items possess rich attributes like text and images. Semantic ID-based approaches effectively discretize this information into compact tokens. However, two challenges persist: (1) Suboptimal Tokenization: existing methods (e.g., RQ-VAE) lack disentanglement between shared cross-modal semantics and modality-specific details, causing redundancy or collapse; (2) Architecture-Data Mismatch: vanilla Transformers treat semantic IDs as flat streams, ignoring the hierarchy of user interactions, items, and tokens. Expanding items into multiple tokens amplifies length and noise, biasing attention toward local details over holistic semantics. We propose Hi-SAM, a Hierarchical Structure-Aware Multi-modal framework with two designs: (1) Disentangled Semantic Tokenizer (DST): unifies modalities via geometry-aware alignment and quantizes them via a coarse-to-fine strategy. Shared codebooks distill consensus while modality-specific ones recover nuances from residuals, enforced by mutual information minimization; (2) Hierarchical Memory-Anchor Transformer (HMAT): splits positional encoding into inter- and intra-item subspaces via Hierarchical RoPE to restore hierarchy. It inserts Anchor Tokens to condense items into compact memory, retaining details for the current item while accessing history only through compressed summaries. Experiments on real-world datasets show consistent improvements over SOTA baselines, especially in cold-start scenarios. Deployed on a large-scale social platform serving millions of users, Hi-SAM achieved a 6.55% gain in the core online metric.
Problem

Research questions and friction points this paper is trying to address.

multi-modal recommendation
semantic tokenization
hierarchical structure
cross-modal disentanglement
attention bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangled Semantic Tokenizer
Hierarchical Memory-Anchor Transformer
Multi-modal Recommendation
Hierarchical RoPE
Semantic ID
Pingjun Pan
NetEase Cloud Music, NetEase, Hangzhou, China
Tingting Zhou
NetEase Cloud Music, NetEase, Hangzhou, China
Peiyao Lu
NetEase Cloud Music, NetEase, Hangzhou, China
Tingting Fei
NetEase Cloud Music, NetEase, Hangzhou, China
Hongxiang Chen
Student, University College London
Quantum Machine Learning
Chuanjiang Luo
Google Inc.
Geometric computing, spectral shape analysis