Facilitating Multi-Role and Multi-Behavior Collaboration of Large Language Models for Online Job Seeking and Recruiting

📅 2024-05-28
🏛️ arXiv.org
📈 Citations: 8
Influential: 0
🤖 AI Summary
In online recruitment, conventional job–candidate matching methods relying solely on resume and job description texts lack dynamic interaction evidence, limiting matching accuracy. To address this, we propose MockLLM, the first framework employing a “handshake protocol”–driven dual-module collaboration paradigm: large language models (LLMs) are instantiated as interviewer and candidate agents to conduct simulated interviews, forming a unified multi-role, multi-behavior agent architecture. The framework integrates reflective memory generation, dynamic prompt optimization, and bilateral joint evaluation modeling. This enables the generation of high-quality, trustworthy, and bidirectionally aligned interactive dialogues, providing strong complementary evidence for job–candidate matching. Experiments demonstrate that MockLLM achieves state-of-the-art performance across multiple matching tasks, significantly outperforming text-only baseline methods.

📝 Abstract
The emergence of online recruitment services has revolutionized the traditional landscape of job seeking and recruitment, necessitating the development of high-quality industrial applications to improve person-job fitting. Existing methods generally rely on modeling the latent semantics of resumes and job descriptions and learning a matching function between them. Inspired by the powerful role-playing capabilities of Large Language Models (LLMs), we propose to introduce a mock interview process between LLM-played interviewers and candidates. The mock interview conversations can provide additional evidence for candidate evaluation, thereby augmenting traditional person-job fitting based solely on resumes and job descriptions. However, characterizing these two roles in online recruitment still presents several challenges, such as developing the skills to raise interview questions, formulating appropriate answers, and evaluating two-sided fitness. To this end, we propose MockLLM, a novel applicable framework that divides the person-job matching process into two modules: mock interview generation and two-sided evaluation in handshake protocol, jointly enhancing their performance through collaborative behaviors between interviewers and candidates. We design a role-playing framework as a multi-role and multi-behavior paradigm to enable a single LLM agent to effectively behave with multiple functions for both parties. Moreover, we propose reflection memory generation and dynamic prompt modification techniques to refine the behaviors of both sides, enabling continuous optimization of the augmented additional evidence. Extensive experimental results show that MockLLM can achieve the best performance on person-job matching accompanied by high mock interview quality, envisioning its emerging application in real online recruitment in the future.
Problem

Research questions and friction points this paper is trying to address.

Enhancing dynamic person-job matching in online recruitment
Simulating adaptive role-based dialogues for interviews
Improving matching accuracy and scalability in job domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulates interviewer and candidate roles dynamically
Uses reflection memory for behavior refinement
Evaluates interactions via handshake protocol
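
The dual-module design described above (mock interview generation, then two-sided evaluation under a handshake protocol, with reflection memory refining future behavior) can be sketched as a minimal agent loop. This is an illustrative assumption of how the pieces fit together, not the paper's implementation: the `Agent` class, its stubbed `act`/`reflect` methods, and the acceptance logic in `handshake` are all hypothetical placeholders for real LLM calls.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One role (interviewer or candidate) in the multi-role, multi-behavior
    paradigm. In MockLLM a single LLM agent switches behaviors via prompting;
    here each behavior is a stub method. `memory` stands in for the paper's
    reflection memory."""
    role: str
    memory: list = field(default_factory=list)

    def act(self, behavior: str, context: str) -> str:
        # Placeholder for an LLM call conditioned on role, behavior,
        # accumulated reflection memory, and the dialogue context.
        return f"[{self.role}:{behavior}] response to: {context}"

    def reflect(self, dialogue: list) -> None:
        # Reflection memory generation: summarize the finished interview
        # and retain it to refine the agent's future behavior.
        self.memory.append(f"reflection on {len(dialogue)}-turn interview")

def mock_interview(interviewer: Agent, candidate: Agent,
                   job: str, resume: str, rounds: int = 2) -> list:
    """Module 1: generate a simulated interview dialogue as extra
    matching evidence beyond the resume and job description."""
    dialogue = []
    context = f"job: {job} | resume: {resume}"
    for _ in range(rounds):
        question = interviewer.act("ask", context)
        answer = candidate.act("answer", question)
        dialogue.extend([question, answer])
        context = answer  # next question builds on the latest answer
    return dialogue

def handshake(interviewer: Agent, candidate: Agent, dialogue: list) -> bool:
    """Module 2: two-sided evaluation in handshake protocol. A match is
    declared only if BOTH parties accept; the stub accepts whenever the
    evaluation call returns a non-empty judgment (purely illustrative)."""
    interviewer_accepts = bool(interviewer.act("evaluate", str(dialogue)))
    candidate_accepts = bool(candidate.act("evaluate", str(dialogue)))
    return interviewer_accepts and candidate_accepts

# Hypothetical usage: run one mock interview, evaluate, then reflect.
iv, cd = Agent("interviewer"), Agent("candidate")
dialogue = mock_interview(iv, cd, "ML engineer", "5 yrs Python, NLP projects")
matched = hands_ok = handshake(iv, cd, dialogue)
iv.reflect(dialogue)
cd.reflect(dialogue)
```

The handshake step is what makes the evaluation bilateral: unlike one-sided person-job fitting, neither side's acceptance alone produces a match, which mirrors the mutual-selection framing in the abstract.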