AI Summary
In recruitment scenarios, large language models (LLMs) suffer from poor domain adaptability, unstructured outputs, high inference latency, and difficult online deployment when applied to job-candidate matching and explanation generation.
Method: We propose a scalable knowledge distillation framework tailored to recruitment, featuring a multi-objective dual-branch architecture that integrates data-level and logit-level distillation. Our approach combines an encoder-decoder structure, post-training optimization, and prompt engineering to efficiently transfer knowledge from a black-box teacher model.
Contribution/Results: Experiments demonstrate that our method preserves evaluation accuracy while substantially improving inference efficiency. Online A/B testing shows a 0.24% increase in applicant conversion rate and a 0.28% rise in qualified applications. The framework provides a reusable technical pathway for lightweight LLM deployment in vertical domains.
Abstract
Large language models (LLMs) have achieved strong performance across a wide range of natural language processing tasks. However, deploying LLMs at scale for domain-specific applications, such as job-person fit assessment and explanation on job-seeking platforms, introduces distinct challenges. At LinkedIn, the job-person fit task requires analyzing a candidate's public profile against job requirements to produce both a fit assessment and a detailed explanation. Directly applying open-source or fine-tuned LLMs to this task often fails to yield high-quality, actionable feedback because of the complexity of the domain and the need for structured outputs. Moreover, the large size of these models leads to high inference latency and limits scalability, making them unsuitable for online use. To address these challenges, we introduce LANTERN, a novel LLM knowledge distillation framework tailored specifically to job-person fit tasks. LANTERN models multiple objectives, with an encoder model for classification and a decoder model for explanation generation. To better distill knowledge from a strong black-box teacher model into multiple downstream models, LANTERN incorporates multi-level knowledge distillation that integrates both data-level and logit-level insights. Beyond the distillation framework itself, we share insights on post-training techniques and prompt engineering, both of which are crucial for successfully adapting LLMs to domain-specific downstream tasks. Extensive experimental results demonstrate that LANTERN significantly improves task-specific metrics for both job-person fit and explanation. Online evaluations further confirm its effectiveness, showing measurable gains in job seeker engagement, including a 0.24% increase in apply rate and a 0.28% increase in qualified applications.
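To make the multi-level distillation idea concrete, the sketch below shows one common way to combine a data-level term (cross-entropy against teacher-generated hard labels) with a logit-level term (KL divergence against temperature-softened teacher probabilities) into a single student training loss. This is a minimal, illustrative formulation of standard knowledge distillation, not LANTERN's actual implementation; all function names, the weighting `alpha`, and the temperature `T` are assumptions for the example, and in the black-box-teacher setting the "teacher probabilities" would themselves have to be estimated (e.g., from sampled teacher outputs) rather than read off the teacher's logits.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def kd_loss(student_logits, teacher_probs, hard_label, alpha=0.5, T=2.0):
    """Combined distillation loss for one example.

    - Logit-level term: KL(teacher || student) on T-softened distributions,
      scaled by T^2 so its gradient magnitude matches the hard-label term.
    - Data-level term: cross-entropy against the teacher-generated hard label.
    Assumes teacher_probs is a strictly positive probability vector.
    """
    p_student = softmax(student_logits, T)
    kl = float(np.sum(teacher_probs * np.log(teacher_probs / p_student)))
    ce = -float(np.log(softmax(student_logits)[hard_label]))
    return alpha * (T ** 2) * kl + (1.0 - alpha) * ce
```

In practice the two terms pull the student in complementary directions: the hard-label term fits the teacher's final fit/no-fit decisions, while the softened KL term transfers the teacher's relative confidence across classes.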