Barlow Twins for Sequential Recommendation

📅 2025-10-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address core challenges in sequential recommendation—including interaction sparsity, popularity bias, and the difficulty of balancing accuracy and diversity—this paper proposes BT-SR, the first framework to integrate non-contrastive self-supervised learning (specifically, Barlow Twins) into sequential recommendation. BT-SR eliminates reliance on negative sampling and hand-crafted data augmentation; instead, it leverages redundancy reduction to align users’ short-term behavioral patterns while preserving distinctions in their long-term interests. A single tunable hyperparameter enables flexible control over the accuracy–diversity trade-off. Evaluated on five public benchmarks, BT-SR achieves significant improvements in next-item prediction accuracy, while simultaneously enhancing coverage of long-tail items and improving recommendation calibration—thereby effectively mitigating popularity bias.

📝 Abstract
Sequential recommendation models must navigate sparse interaction data, popularity bias, and conflicting objectives like accuracy versus diversity. While recent contrastive self-supervised learning (SSL) methods offer improved accuracy, they come with trade-offs: large batch requirements, reliance on hand-crafted augmentations, and negative sampling that can reinforce popularity bias. In this paper, we introduce BT-SR, a novel non-contrastive SSL framework that integrates the Barlow Twins redundancy-reduction principle into a Transformer-based next-item recommender. BT-SR learns embeddings that align users with similar short-term behaviors while preserving long-term distinctions, without requiring negative sampling or artificial perturbations. This structure-sensitive alignment allows BT-SR to more effectively recognize emerging user intent and mitigate the influence of noisy historical context. Our experiments on five public benchmarks demonstrate that BT-SR consistently improves next-item prediction accuracy and significantly enhances long-tail item coverage and recommendation calibration. Crucially, we show that a single hyperparameter can control the accuracy–diversity trade-off, enabling practitioners to adapt recommendations to specific application needs.
Problem

Research questions and friction points this paper is trying to address.

Addresses sparse data and popularity bias in sequential recommendation
Eliminates negative sampling and artificial data augmentation requirements
Balances the accuracy–diversity trade-off via a single controllable hyperparameter
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Barlow Twins redundancy-reduction in Transformer
Learns embeddings without negative sampling or augmentations
Controls accuracy-diversity tradeoff with single hyperparameter
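The redundancy-reduction idea the bullets refer to can be illustrated with the standard Barlow Twins objective (Zbontar et al., 2021), which BT-SR adapts to user-sequence embeddings. The sketch below is a minimal NumPy version, not the paper's implementation: how BT-SR constructs the two views (e.g. short-term vs. full-history encodings) is an assumption here, and `lam` stands in for the single tunable hyperparameter the paper reports as controlling the accuracy–diversity trade-off.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins objective on two (batch, dim) embedding views.

    In BT-SR the views would be two encodings of the same user
    sequence (view construction is assumed, not taken from the paper).
    lam weights the redundancy-reduction term; larger values decorrelate
    embedding dimensions more aggressively.
    """
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-8)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-8)
    # Cross-correlation matrix between the two views.
    c = z_a.T @ z_b / n
    # Invariance term: pull the diagonal toward 1 (views agree).
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    # Redundancy-reduction term: push off-diagonal entries toward 0.
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag
```

Because the objective needs no negative pairs, only agreement between views plus decorrelation across dimensions, it avoids the negative sampling that the abstract identifies as a source of popularity bias.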
Ivan Razvorotnev (Skoltech, Higher School of Economics)
Marina Munkhoeva (AIRI)
Evgeny Frolov (AIRI)
Recommender Systems · Tensor Factorization · Hyperbolic Geometry