🤖 AI Summary
Existing benchmarks for person–job matching struggle to diagnose model failures in complex scenarios such as skill transfer and competency-based reasoning. To address this limitation, this work introduces a retrieval evaluation benchmark grounded in real-world hiring contexts, where full job descriptions serve as queries and complete resumes as documents. Relevance is annotated according to job competency requirements, and—critically—the benchmark incorporates industry domain and reasoning type as fine-grained diagnostic dimensions, shifting the evaluation paradigm from aggregate performance metrics toward capability profiling. Experiments on a large-scale, multi-domain dataset constructed from real hiring data reveal that performance disparities across industries substantially outweigh the gains from architectural improvements; and while re-ranking consistently improves effectiveness, combining it with query understanding unexpectedly degrades performance.
📝 Abstract
As retrieval models converge on generic benchmarks, the pressing question is no longer "who scores higher" but "where do systems fail, and why?" Person-job matching is a domain that urgently demands such diagnostic capability: it requires systems not only to verify explicit constraints but also to perform skill-transfer inference and job-competency reasoning, yet existing benchmarks provide no systematic diagnostic support for this task. We introduce PJB (Person-Job Benchmark), a reasoning-aware retrieval evaluation dataset that uses complete job descriptions as queries and complete resumes as documents, and defines relevance through job-competency judgment. PJB is grounded in real-world recruitment data spanning six industry domains and nearly 200,000 resumes, and its domain-family and reasoning-type diagnostic labels upgrade evaluation from "who scores higher" to "where do systems differ, and why." Diagnostic experiments with dense retrieval reveal that performance heterogeneity across industry domains far exceeds the gains from module upgrades within the same model, indicating that aggregate scores alone can severely mislead optimization decisions. At the module level, reranking yields stable improvements, while query understanding not only fails to help but actively degrades overall performance when combined with reranking -- the two modules face fundamentally different improvement bottlenecks. The value of PJB lies not in yet another leaderboard of average scores, but in a capability map that shows recruitment retrieval systems where to invest.
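The capability-profiling idea described above -- reporting retrieval quality per diagnostic label rather than as one aggregate score -- can be sketched in a few lines. This is a minimal illustration, not PJB's actual evaluation code; the record schema, label names, and metric choice (nDCG) are assumptions for the example.

```python
import math
from collections import defaultdict

def dcg(relevances):
    """Discounted cumulative gain over a ranked list of graded relevances."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_rels, k=10):
    """nDCG@k: DCG of the system ranking normalized by the ideal ranking."""
    ideal = sorted(ranked_rels, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(ranked_rels[:k]) / denom if denom > 0 else 0.0

def capability_profile(results):
    """Average per-query nDCG by diagnostic label instead of one global mean.

    `results` is a list of (domain, reasoning_type, ranked_relevances)
    tuples -- an illustrative schema, not PJB's actual data format.
    """
    by_label = defaultdict(list)
    for domain, reasoning, rels in results:
        score = ndcg(rels)
        by_label[("domain", domain)].append(score)
        by_label[("reasoning", reasoning)].append(score)
    return {label: sum(v) / len(v) for label, v in by_label.items()}

# Toy example: two job-description queries from different domains,
# each with the graded relevances of its top-ranked resumes.
results = [
    ("finance", "skill-transfer", [3, 2, 0, 1]),
    ("healthcare", "explicit-constraint", [0, 1, 3, 2]),
]
profile = capability_profile(results)
```

A single mean over `results` would hide the gap between the two domains; the profile keeps each slice separate, which is what lets heterogeneity across industries show up at all.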