🤖 AI Summary
This work addresses limitations in existing large language model (LLM)-based talent recommendation systems, which predominantly adopt pointwise paradigms that fail to capture inter-candidate relationships and suffer from position bias and intermediate information loss, leading to suboptimal performance and high computational costs. To overcome these issues, the authors propose L3TR, a listwise recommendation framework that implicitly leverages the latent outputs of LLMs to enhance both accuracy and efficiency. L3TR introduces a novel block attention mechanism combined with local positional encoding to mitigate position and concurrent token biases, employs an ID sampling strategy to align candidate set sizes between training and inference, and incorporates a training-free debiased evaluation method. Experiments on two real-world datasets demonstrate that L3TR significantly outperforms current baselines, achieving superior recommendation performance while substantially reducing token consumption.
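The efficiency claim above can be made concrete with a back-of-the-envelope token count. The sketch below is purely illustrative; all numbers are hypothetical and not taken from the paper. It shows why a pointwise recommender, which re-encodes the shared job posting once per candidate, consumes far more tokens than a listwise one that encodes it once:

```python
# Hypothetical token counts, chosen only for illustration.
job_tokens = 500        # shared job description
cand_tokens = 300       # one candidate profile
n_candidates = 20       # size of the candidate list

# Pointwise: one LLM call per candidate; the job description is
# re-processed in every call.
pointwise_total = n_candidates * (job_tokens + cand_tokens)

# Listwise: a single call covering the job description plus the
# whole candidate list.
listwise_total = job_tokens + n_candidates * cand_tokens

print(pointwise_total, listwise_total)
```

Under these toy numbers the pointwise setup processes 16,000 tokens versus 6,500 for the listwise one; the gap grows with the candidate count, since the shared text is duplicated once per candidate.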
📝 Abstract
Talent recruitment is a critical yet costly process for many industries, marked by high recruitment expenses and long hiring cycles. Existing talent recommendation systems increasingly adopt large language models (LLMs) due to their remarkable language understanding capabilities. However, most prior approaches follow a pointwise paradigm, which requires LLMs to repeatedly process the same text for every candidate and fails to capture the relationships among candidates in the list, resulting in higher token consumption and suboptimal recommendations. Moreover, LLMs exhibit position bias and the lost-in-the-middle issue when answering multiple-choice questions and processing multiple long documents. To address these issues, we introduce an implicit strategy that utilizes the LLM's latent outputs for the recommendation task and propose L3TR, a novel framework for listwise talent recommendation with LLMs. Within this framework, we propose a block attention mechanism and a local positional encoding method to enhance inter-document processing and mitigate the position bias and concurrent-token bias issues. We also introduce an ID sampling method to resolve the inconsistency between candidate set sizes at training and inference time. Finally, we design evaluation methods to detect position bias and token bias, together with training-free debiasing methods. Extensive experiments on two real-world datasets validate the effectiveness of L3TR, showing consistent improvements over existing baselines.
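The abstract does not spell out the block attention mechanism or the local positional encoding, but one common way to realize such a design is to let each candidate occupy its own block that attends causally to a shared prompt prefix and to itself, never to other candidates, while positional indices restart at the same offset for every block so that each candidate is encoded independently of its position in the list. The NumPy sketch below illustrates that idea; the block sizes and the exact masking scheme are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical layout: a shared prompt prefix followed by 3 candidate blocks.
prefix_len, block_len, n_blocks = 4, 3, 3
total = prefix_len + n_blocks * block_len

def block_of(i):
    """Return -1 for prefix tokens, else the candidate block index."""
    return -1 if i < prefix_len else (i - prefix_len) // block_len

# Block attention mask: 1 = token i may attend to token j. Attention is
# causal, and a candidate token sees only the shared prefix and its own
# block, so candidates cannot influence one another's encodings.
mask = np.zeros((total, total), dtype=int)
for i in range(total):
    for j in range(i + 1):
        if block_of(j) == -1 or block_of(j) == block_of(i):
            mask[i, j] = 1

# Local positional encoding: every candidate block reuses the same position
# indices, so a candidate "sees" itself at the same offset regardless of
# where it appears in the list, removing one source of position bias.
pos = list(range(prefix_len)) + [
    prefix_len + k for _ in range(n_blocks) for k in range(block_len)
]
```

With this layout, token 8 (second token of candidate 1) can attend to the prefix and to candidate 1's own earlier tokens, but not to candidate 0's tokens, and the first token of every candidate shares position index `prefix_len`.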