Bi-Level Optimization for Generative Recommendation: Bridging Tokenization and Generation

πŸ“… 2025-10-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In generative recommendation, item tokenization and recommendation modeling are typically trained separately, so the tokenizer receives no recommendation-aware guidance and may produce identifiers that are informative in isolation yet poorly aligned with the recommendation objective. To address this, we propose BLOGER, a framework that casts tokenization and generative recommendation as a bi-level optimization problem: the lower level trains the generative recommender on tokenized sequences, while the upper level adapts the tokenizer to produce recommendation-friendly tokens. A meta-learning procedure solves the bi-level problem efficiently, and a gradient surgery mechanism mitigates conflicts between the tokenization and recommendation gradients in the upper-level updates. BLOGER thus enables end-to-end joint training within an autoregressive generation paradigm. Extensive experiments on multiple real-world datasets show that BLOGER consistently outperforms state-of-the-art methods in recommendation accuracy with manageable computational overhead, effectively bridging the gap between item tokenization and generative recommendation.

πŸ“ Abstract
Generative recommendation is emerging as a transformative paradigm by directly generating recommended items, rather than relying on matching. Building such a system typically involves two key components: (1) optimizing the tokenizer to derive suitable item identifiers, and (2) training the recommender based on those identifiers. Existing approaches often treat these components separately (either sequentially or in alternation), overlooking their interdependence. This separation can lead to misalignment: the tokenizer is trained without direct guidance from the recommendation objective, potentially yielding suboptimal identifiers that degrade recommendation performance. To address this, we propose BLOGER, a Bi-Level Optimization for GEnerative Recommendation framework, which explicitly models the interdependence between the tokenizer and the recommender in a unified optimization process. The lower level trains the recommender using tokenized sequences, while the upper level optimizes the tokenizer based on both the tokenization loss and recommendation loss. We adopt a meta-learning approach to solve this bi-level optimization efficiently, and introduce gradient surgery to mitigate gradient conflicts in the upper-level updates, thereby ensuring that item identifiers are both informative and recommendation-aligned. Extensive experiments on real-world datasets demonstrate that BLOGER consistently outperforms state-of-the-art generative recommendation methods while maintaining practical efficiency with no significant additional computational overhead, effectively bridging the gap between item tokenization and autoregressive generation.
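The abstract's bi-level scheme, where a one-step (unrolled) inner update of the recommender lets the tokenizer receive gradient signal from the recommendation loss, can be illustrated with a toy scalar stand-in. The losses `l_rec` and `l_tok` below are hypothetical quadratics, not the paper's actual objectives; the sketch only shows the meta-learning mechanics of differentiating through the inner step.

```python
# Toy scalar sketch of bi-level optimization with a one-step unrolled
# inner update (meta-learning style). All losses are illustrative:
#   lower level: recommender theta minimizes l_rec(theta, phi)
#   upper level: tokenizer phi minimizes l_tok(phi) + l_rec(theta', phi),
#     where theta' is the recommender after one inner gradient step.

def l_rec(theta, phi):
    # hypothetical recommendation loss: recommender should match tokenizer
    return (theta - phi) ** 2

def l_tok(phi):
    # hypothetical tokenization loss: tokenizer pulled toward 1.0
    return (phi - 1.0) ** 2

eta, lr = 0.1, 0.05        # inner (lower-level) and outer (upper-level) rates
theta, phi = 0.0, 3.0
for _ in range(200):
    # lower level: one gradient step on the recommender
    g_theta = 2.0 * (theta - phi)
    theta_new = theta - eta * g_theta
    # upper level: differentiate through the unrolled step w.r.t. phi;
    # theta_new = theta - 2*eta*(theta - phi)  =>  d(theta_new)/d(phi) = 2*eta
    d_theta_new = 2.0 * eta
    g_phi = 2.0 * (phi - 1.0) + 2.0 * (theta_new - phi) * (d_theta_new - 1.0)
    phi = phi - lr * g_phi
    theta = theta_new

print(theta, phi)  # both converge toward 1.0
```

Because the upper-level gradient flows through `theta_new`, the tokenizer update accounts for how the recommender will react to it, which is the interdependence the paper says sequential or alternating training ignores.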
Problem

Research questions and friction points this paper is trying to address.

Modeling the interdependence between the tokenizer and the recommender in generative recommendation systems
Addressing misalignment between tokenization quality and recommendation performance
Developing unified bi-level optimization for joint tokenizer-recommender training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bi-level optimization unifies tokenizer and recommender training
Meta-learning approach efficiently solves bi-level optimization problem
Gradient surgery mitigates conflicts between tokenization and recommendation objectives
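The gradient-surgery idea noted above usually denotes a PCGrad-style projection: when the tokenization gradient conflicts with the recommendation gradient (negative dot product), project the former onto the normal plane of the latter. The paper's exact variant may differ; this is a minimal sketch with illustrative names.

```python
import numpy as np

def surgery(g_token, g_rec):
    """PCGrad-style gradient surgery: if g_token conflicts with g_rec
    (negative dot product), remove its component along g_rec so the
    combined upper-level update no longer fights the recommendation
    objective."""
    dot = float(np.dot(g_token, g_rec))
    if dot < 0.0:
        g_token = g_token - dot / (float(np.dot(g_rec, g_rec)) + 1e-12) * g_rec
    return g_token

# Conflicting toy gradients for the upper-level (tokenizer) update.
g_tok = np.array([1.0, -2.0])
g_rec = np.array([1.0, 1.0])      # dot(g_tok, g_rec) = -1 < 0: conflict
g_fixed = surgery(g_tok, g_rec)   # -> [1.5, -1.5], orthogonal to g_rec
print(g_fixed, np.dot(g_fixed, g_rec))
```

Non-conflicting gradients pass through unchanged, so the projection only intervenes when the two objectives actively pull the tokenizer in opposing directions.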
πŸ”Ž Similar Papers
No similar papers found.