🤖 AI Summary
This work proposes a re-ranking model built on a late cross-attention architecture to address inefficiency and historical bias in matching long, multilingual, structured resumes with job descriptions. The approach decomposes resumes and job briefs to model long-range contextual dependencies efficiently, and leverages a large language model to generate fine-grained semantic supervision signals for knowledge distillation. An enriched distillation loss function improves matching consistency and interpretability. Experiments show that the model outperforms state-of-the-art baselines in relevance, ranking quality, and calibration, improving the accuracy and reliability of person-job matching.
📝 Abstract
Finding the most relevant person for a job proposal in real time is challenging, especially when resumes are long, structured, and multilingual. In this paper, we propose a re-ranking model based on a late cross-attention architecture that decomposes both resumes and project briefs, efficiently handling long-context inputs with minimal computational overhead. To mitigate historical data biases, we use a generative large language model (LLM) as a teacher, producing fine-grained, semantically grounded supervision. This signal is distilled into our student model via an enriched distillation loss function. The resulting model produces skill-fit scores that enable consistent and interpretable person-job matching. Experiments on relevance, ranking, and calibration metrics demonstrate that our approach outperforms state-of-the-art baselines.
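To make the two core ideas concrete, here is a minimal sketch of (a) late-interaction scoring, where resume and brief chunks are encoded independently and only interact at scoring time, and (b) a listwise distillation loss against teacher scores. This is an illustrative assumption, not the paper's actual architecture: the random stand-in encoder, the MaxSim aggregation, and the plain KL loss are placeholders for the paper's encoder, decomposition scheme, and enriched loss.

```python
import numpy as np

def encode(chunks, dim=8, seed=0):
    # Stand-in encoder producing one unit-norm embedding per chunk.
    # (Hypothetical: a real system would use a transformer encoder here.)
    rng = np.random.default_rng(seed)
    E = rng.normal(size=(len(chunks), dim))
    return E / np.linalg.norm(E, axis=1, keepdims=True)

def late_interaction_score(resume_emb, brief_emb):
    # Late interaction: the two sides are encoded separately (cheap, cacheable)
    # and only interact here. MaxSim-style aggregation: each brief chunk is
    # matched to its best resume chunk, then scores are averaged.
    sims = brief_emb @ resume_emb.T          # (n_brief_chunks, n_resume_chunks)
    return float(sims.max(axis=1).mean())

def distillation_loss(student_scores, teacher_scores, tau=1.0):
    # Listwise distillation over a candidate slate: KL divergence between the
    # teacher's and student's softmax score distributions (one common choice;
    # the paper's enriched loss is not reproduced here).
    def softmax(x):
        z = np.exp((np.asarray(x, dtype=float) - np.max(x)) / tau)
        return z / z.sum()
    p, q = softmax(teacher_scores), softmax(student_scores)
    return float(np.sum(p * np.log(p / q)))
```

In this setup the resume-side embeddings can be precomputed offline, so re-ranking a job brief against many candidates costs only the similarity matrix per candidate, which is what makes late interaction attractive for real-time matching.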