High Fidelity Textual User Representation over Heterogeneous Sources via Reinforcement Learning

📅 2026-02-07
🤖 AI Summary
This work addresses the challenge of unified representation learning from multi-source heterogeneous user texts—such as profiles, job histories, and search logs—in large-scale job platforms. We propose a novel unsupervised framework that leverages reinforcement learning to fuse these diverse textual sources into concise, interpretable user representations tailored for large language models (LLMs). The approach utilizes implicit user interaction signals (e.g., clicks and applications) as the primary reward, augmented with rule-based rewards to constrain output format and length, thereby achieving both business relevance and interpretability without manual annotations. Extensive offline experiments across multiple LinkedIn product lines demonstrate significant improvements in key downstream recommendation metrics, validating the method’s effectiveness, scalability, and practical utility.

📝 Abstract
Effective personalization on large-scale job platforms requires modeling members based on heterogeneous textual sources, including profiles, professional data, and search activity logs. As recommender systems increasingly adopt Large Language Models (LLMs), creating unified, interpretable, and concise representations from heterogeneous sources becomes critical, especially for latency-sensitive online environments. In this work, we propose a novel Reinforcement Learning (RL) framework to synthesize a unified textual representation for each member. Our approach leverages implicit user engagement signals (e.g., clicks, applies) as the primary reward to distill salient information. Additionally, the framework is complemented by rule-based rewards that enforce formatting and length constraints. Extensive offline experiments across multiple products at LinkedIn, one of the world's largest job platforms, demonstrate significant improvements in key downstream business metrics. This work provides a practical, labeling-free, and scalable solution for constructing interpretable user representations that are directly compatible with LLM-based systems.
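The reward design described above combines a primary implicit-engagement signal with rule-based format and length constraints. A minimal sketch of one way such a composite reward could be wired up is shown below; the function names, the format check, and the weighting scheme are all illustrative assumptions, not the paper's actual implementation.

```python
def rule_based_reward(summary: str, max_words: int = 120) -> float:
    """Hypothetical rule-based reward: favors concise, well-formed output.

    Assumptions (not from the paper): a word-count budget and a simple
    bullet-style format check stand in for the real format/length rules.
    """
    words = summary.split()
    # Full credit within budget; decay proportionally when over budget.
    length_score = 1.0 if len(words) <= max_words else max_words / len(words)
    # Toy format rule: require the representation to be a bulleted list.
    format_score = 1.0 if summary.strip().startswith("- ") else 0.0
    return 0.5 * length_score + 0.5 * format_score


def composite_reward(engagement: float, summary: str, beta: float = 0.3) -> float:
    """Blend the primary engagement reward with the rule-based term.

    `engagement` would come from downstream implicit signals (e.g., clicks,
    applies); `beta` controls how strongly formatting is enforced.
    """
    return (1.0 - beta) * engagement + beta * rule_based_reward(summary)
```

Under this sketch, a well-formatted, concise summary keeps most of its engagement-driven reward, while malformed or overly long output is penalized without requiring any manual labels.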
Problem

Research questions and friction points this paper is trying to address.

user representation
heterogeneous sources
large language models
personalization
job platforms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
User Representation
Heterogeneous Text Sources
Large Language Models
Interpretable Representation