Generative Data Transformation: From Mixed to Unified Data

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of cross-domain recommendation, where domain discrepancies often lead to negative transfer, and where existing model-centric approaches struggle to capture unstructured sequential dependencies, resulting in limited generalization and high computational costs. To overcome these limitations, we propose Taesar, a novel framework that pioneers a data-generation-centric paradigm for cross-domain alignment. Taesar leverages a contrastive decoding mechanism to adaptively fuse contextual information from multiple auxiliary domains, generating unified training sequences that are both aligned and information-rich. This removes the reliance on complex model architectures and enables synergistic optimization of data and model. Extensive experiments demonstrate that Taesar significantly outperforms state-of-the-art model-centric methods across multiple benchmarks, integrates seamlessly with diverse sequential models, and effectively alleviates data sparsity and cold-start issues.

📝 Abstract
Recommendation model performance is intrinsically tied to the quality, volume, and relevance of training data. To address common challenges such as data sparsity and cold start, recent research has leveraged data from multiple auxiliary domains to enrich information within the target domain. However, inherent domain gaps can degrade the quality of mixed-domain data, leading to negative transfer and diminished model performance. The prevailing \emph{model-centric} paradigm -- which relies on complex, customized architectures -- struggles to capture the subtle, non-structural sequence dependencies across domains, resulting in poor generalization and high computational demands. To address these shortcomings, we propose \textsc{Taesar}, a \emph{data-centric} framework for \textbf{t}arget-\textbf{a}lign\textbf{e}d \textbf{s}equenti\textbf{a}l \textbf{r}egeneration, which employs a contrastive decoding mechanism to adaptively encode cross-domain context into target-domain sequences, enabling standard models to learn intricate dependencies without complex fusion architectures. Experiments show that \textsc{Taesar} outperforms model-centric solutions and generalizes to various sequential models. By generating enriched datasets, \textsc{Taesar} effectively combines the strengths of data- and model-centric paradigms. The code accompanying this paper is available at \textcolor{blue}{https://github.com/USTC-StarTeam/Taesar}.
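The paper does not spell out its decoding rule here, but standard contrastive decoding can be sketched as below. The setup is an assumption for illustration: an "expert" distribution (e.g. a model conditioned on auxiliary-domain context) is contrasted against an "amateur" distribution (the same model without that context), with a plausibility mask so only tokens the expert finds likely are considered. The function name, the `tau` threshold, and the expert/amateur pairing are all hypothetical, not Taesar's actual implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D logit vector
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def contrastive_decode_step(expert_logits, amateur_logits, tau=0.1):
    """Pick the next item/token by contrasting an expert distribution
    (context-enriched) against an amateur distribution (context-free).

    tau controls the plausibility constraint: only tokens whose expert
    probability is at least tau * max(expert probability) are eligible.
    """
    p_exp = softmax(np.asarray(expert_logits, dtype=float))
    p_ama = softmax(np.asarray(amateur_logits, dtype=float))
    mask = p_exp >= tau * p_exp.max()
    # contrastive score: boost tokens the expert prefers relative to the amateur
    score = np.where(mask, np.log(p_exp + 1e-12) - np.log(p_ama + 1e-12), -np.inf)
    return int(np.argmax(score))

# Toy example: token 1 gains probability only when cross-domain context
# is present, so the contrastive score selects it.
expert = np.array([3.0, 2.0, 0.0])   # with auxiliary-domain context
amateur = np.array([3.0, 0.0, 0.0])  # target domain alone
print(contrastive_decode_step(expert, amateur))  # → 1
```

The subtraction of log-probabilities is what makes the rule "contrastive": tokens that both distributions already agree on contribute little, while tokens made plausible specifically by the added cross-domain context are amplified.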
Problem

Research questions and friction points this paper is trying to address.

data sparsity
cold start
domain gap
negative transfer
cross-domain sequential dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

data-centric learning
contrastive decoding
cross-domain recommendation
sequential data generation
negative transfer mitigation