A Mathematical Framework for Custom Reward Functions in Job Application Evaluation using Reinforcement Learning

📅 2025-11-20
🤖 AI Summary
Traditional applicant tracking systems (ATS) rely on rigid keyword matching, often rejecting qualified candidates due to subtle semantic discrepancies. To address this, we propose a two-stage fine-tuning framework: first, supervised fine-tuning (SFT) establishes a small language model baseline; second, policy optimization is performed via Group Relative Policy Optimization (GRPO), guided by a custom multi-component reward function designed to mitigate reward hacking and enable robust, "gentle polishing" training. This approach eliminates explicit keyword dependency, enhancing semantic understanding and generalization. On an unseen test set, our model achieves 91% accuracy, with 0.85 recall and 1.0 precision for the SELECTED class—substantially outperforming both conventional ATS and naive reinforcement learning baselines. Our core contributions are (i) an interpretable, noise-robust resume evaluation paradigm, and (ii) a stable, sample-efficient RL fine-tuning mechanism.

📝 Abstract
Conventional Applicant Tracking Systems (ATS) tend to be inflexible keyword matchers that reject qualified candidates over minor semantic mismatches. This article describes a two-step process for building a more refined resume evaluation model: a small language model (<600M parameters) fine-tuned using GRPO on a custom reward function. First, Supervised Fine-Tuning (SFT) establishes a solid baseline model. Second, this SFT model is optimized with Reinforcement Learning (RL) via GRPO, guided by a new multi-component reward function that assesses candidates holistically rather than through simple keyword matching. We show that naive RL application raises a critical reward-hacking problem: initial experiments with aggressive penalties produced faulty, excessively negative model behaviors. We overcame this challenge by iteratively refining the reward function and training hyperparameters into a stable "gentle polishing" process. The resulting GRPO-polished model demonstrates significant real-world efficacy, achieving a final accuracy of 91% on unseen test data. It correctly identifies qualified candidates (recall of 0.85 for the 'SELECTED' class) while also showing exceptional precision (1.0), confirming its reliability. These results indicate that a properly executed two-step fine-tuning procedure can refine a small language model to perform nuanced, human-like candidate scoring, overcoming the drawbacks of both traditional ATS and naive RL usage.
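The abstract describes a multi-component reward that scores format, correctness, and style jointly while keeping penalties gentle. The paper's exact components and weights are not given here, so the following is only an illustrative sketch; the component names, weights, and thresholds are assumptions:

```python
# Illustrative multi-component reward for a resume-evaluation policy.
# NOT the paper's exact function: components and weights are assumed.
def evaluation_reward(prediction: str, label: str, rationale: str) -> float:
    """Combine several weak signals into one bounded scalar reward."""
    reward = 0.0
    # 1) Format component: the model must emit one of the expected verdicts.
    if prediction in ("SELECTED", "REJECTED"):
        reward += 0.2
    # 2) Correctness component: agreement with the human label dominates.
    if prediction == label:
        reward += 0.6
    # 3) Rationale-length shaping: mild pressure toward a concise justification.
    words = len(rationale.split())
    if 20 <= words <= 120:
        reward += 0.2
    # Rewards stay bounded and penalties gentle, so training "polishes" the
    # SFT policy rather than pushing it into degenerate all-REJECT behavior.
    return min(reward, 1.0)
```

Keeping each component small and additive, rather than applying large negative penalties, is one way to avoid the reward-hacking failure mode the abstract describes.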
Problem

Research questions and friction points this paper is trying to address.

Developing flexible resume evaluation beyond rigid keyword matching
Overcoming reward hacking in reinforcement learning for candidate assessment
Creating human-like scoring using small language models with custom rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned small language model using GRPO
Multi-component reward function for holistic evaluation
Two-step fine-tuning process for stable optimization
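GRPO's central mechanism is that rewards for a group of completions sampled from the same prompt are normalized within the group, yielding relative advantages without a learned value model. A minimal stdlib sketch of that normalization (function name and epsilon are assumptions, not from the paper):

```python
import statistics

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Normalize per-completion rewards within one sampled group.

    Each advantage is the reward's z-score relative to its group, so
    completions are ranked against siblings from the same prompt.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]
```

Because advantages are relative within each group, the absolute scale of the reward function matters less than its ordering, which complements the "gentle polishing" reward design above.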
Shreyansh Jain
MTech@IIITD
Computer Vision · Deep Learning
Madhav Singhvi
Halıcıoğlu Data Science Institute, University of California San Diego, San Diego, United States of America
Shreya Rahul Jain
Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, India
Pranav S
Department of Computer Science and Engineering, Sastra University, Thirumalaisamudram, Thanjavur, India
Dishaa Lokesh
Department of Computer Science and Engineering, Sastra University, Thirumalaisamudram, Thanjavur, India
Naren Chittibabu
Department of Computer Science and Engineering, Sastra University, Thirumalaisamudram, Thanjavur, India
Akash Anandhan
Department of Computer Science and Engineering, Sastra University, Thirumalaisamudram, Thanjavur, India