CoMAI: A Collaborative Multi-Agent Framework for Robust and Equitable Interview Evaluation

📅 2026-03-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of robustness and fairness in AI-driven interview evaluation by proposing the first multi-agent collaborative framework that integrates multi-layered security defenses, adaptive difficulty adjustment, and rubric-based scoring. The architecture employs four specialized agents—responsible for question generation, safety enforcement, scoring, and summarization—orchestrated through a centralized finite-state machine to enable modular task decomposition. This design ensures a secure, multidimensional, and structured assessment process. Experimental results demonstrate that the system achieves strong performance in accuracy (90.47%), recall (83.33%), and candidate satisfaction (84.41%), while significantly mitigating subjective bias, thereby validating both the efficacy of the approach and its acceptability to users.

📝 Abstract
Ensuring robust and fair interview assessment remains a key challenge in AI-driven evaluation. This paper presents CoMAI, a general-purpose multi-agent interview framework designed for diverse assessment scenarios. In contrast to monolithic single-agent systems based on large language models (LLMs), CoMAI employs a modular task-decomposition architecture coordinated through a centralized finite-state machine. The system comprises four agents specialized in question generation, security, scoring, and summarization. These agents work collaboratively to provide multi-layered security defenses against prompt injection, support multidimensional evaluation with adaptive difficulty adjustment, and enable rubric-based structured scoring that reduces subjective bias. Experimental results demonstrate that CoMAI achieves 90.47% accuracy, 83.33% recall, and 84.41% candidate satisfaction. These results highlight CoMAI as a robust, fair, and interpretable paradigm for AI-driven interview assessment.
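The centralized finite-state-machine orchestration described in the abstract can be sketched in miniature. This is an illustrative assumption, not the paper's implementation: the state set, the `InterviewFSM` class, the stub agent methods, and the keyword-based injection screen are all invented here to show how one controller can route turns among question-generation, safety, scoring, and summarization agents.

```python
from enum import Enum, auto

class State(Enum):
    # Illustrative states; the paper does not publish its actual state set.
    QUESTION = auto()
    SAFETY = auto()
    SCORING = auto()
    SUMMARY = auto()
    DONE = auto()

class InterviewFSM:
    """Toy centralized FSM routing one interview among four stub agents."""

    def __init__(self):
        self.scores = []
        self.difficulty = 1  # adaptive-difficulty knob (assumed mechanism)

    # Stub agents; a real system would back each with an LLM call.
    def question_agent(self):
        return f"Q(difficulty={self.difficulty})"

    def safety_agent(self, answer):
        # Crude prompt-injection screen: reject obvious override phrases.
        return "ignore previous instructions" not in answer.lower()

    def scoring_agent(self, answer):
        return min(10, len(answer.split()))  # placeholder rubric score

    def summary_agent(self):
        avg = sum(self.scores) / len(self.scores) if self.scores else 0.0
        return f"{len(self.scores)} answers scored, mean {avg:.1f}/10"

    def run(self, answers):
        state, log, i = State.QUESTION, [], 0
        while state is not State.DONE:
            if state is State.QUESTION:
                if i >= len(answers):           # no more answers: wrap up
                    state = State.SUMMARY
                else:
                    log.append(self.question_agent())
                    state = State.SAFETY
            elif state is State.SAFETY:
                if self.safety_agent(answers[i]):
                    state = State.SCORING
                else:
                    log.append("answer rejected by safety agent")
                    i += 1
                    state = State.QUESTION
            elif state is State.SCORING:
                score = self.scoring_agent(answers[i])
                self.scores.append(score)
                # Raise difficulty on strong answers, lower it otherwise.
                self.difficulty = max(1, self.difficulty + (1 if score >= 5 else -1))
                i += 1
                state = State.QUESTION
            elif state is State.SUMMARY:
                log.append(self.summary_agent())
                state = State.DONE
        return log
```

Keeping all routing inside one `run` loop mirrors the paper's design point: agents stay single-purpose, and only the central controller decides which agent acts next.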
Problem

Research questions and friction points this paper is trying to address.

interview evaluation
fairness
robustness
subjective bias
prompt injection
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent framework
modular task decomposition
prompt injection defense
rubric-based scoring
adaptive difficulty adjustment
Gengxin Sun
Shandong University
Ruihao Yu
Shandong University
Liangyi Yin
Shandong University
Yunqi Yang
Shandong University
Bin Zhang
Institute of Automation, Chinese Academy of Sciences
AI Agent · Multi-agent System · Reinforcement Learning
Zhiwei Xu
Shandong University
Reinforcement Learning · Multi-Agent System · LLM-based Agent