A Forced-Choice Neural Cognitive Diagnostic Model of Personality Testing

📅 2025-07-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Forced-choice (FC) personality assessments suffer from severe response biases, and conventional item response theory (IRT) and cognitive diagnosis models (CDMs) struggle to simultaneously satisfy item unidimensionality, accommodate multi-item blocks, and ensure parameter interpretability. Method: We propose the Forced-Choice Neural Cognitive Diagnostic Model (FCNCD), a deep learning framework that employs multilayer neural networks to model nonlinear interactions between latent traits and item features, uniformly handles three common FC block types, enforces monotonicity constraints for parameter interpretability, and explicitly incorporates the unidimensionality assumption per item. Contribution/Results: Extensive simulations and real-data experiments demonstrate that FCNCD substantially improves measurement accuracy and robustness against response biases. It yields interpretable individual trait profiles and diagnostic item parameters, overcoming fundamental limitations of traditional IRT and CDM approaches in FC settings.

📝 Abstract
In the smart era, psychometric tests are becoming increasingly important for personnel selection, career development, and mental health assessment. Forced-choice tests are common in personality assessments because they require participants to select from closely related options, lowering the risk of response distortion. This study presents a deep learning-based Forced-Choice Neural Cognitive Diagnostic Model (FCNCD) that overcomes the limitations of traditional models and is applicable to the three most common item block types found in forced-choice tests. To account for the unidimensionality of items in forced-choice tests, we create interpretable participant and item parameters. We extract participant and item features through nonlinear mappings and model their interactions with multilayer neural networks. In addition, we use the monotonicity assumption to improve the interpretability of the diagnostic results. The FCNCD's effectiveness is validated by experiments on real-world and simulated datasets that demonstrate its accuracy, interpretability, and robustness.
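The abstract's two key ideas, a neural network that models trait-item interactions and a monotonicity constraint that keeps diagnostic parameters interpretable, can be illustrated with a minimal sketch. Everything below (names, shapes, the weight-clipping trick, the untrained random weights) is an illustrative assumption, not the paper's actual architecture: one common way to enforce monotonicity in neural cognitive diagnosis is to constrain the interaction network's weights to be non-negative, so a higher latent trait can never lower the predicted endorsement probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traits, hidden = 5, 8

# Hypothetical unidimensional item: its Q-vector loads on trait index 2 only.
q = np.eye(n_traits)[2]
disc = rng.uniform(0.5, 2.0, n_traits)   # item discrimination (positive)
diff = rng.uniform(-1.0, 1.0, n_traits)  # item difficulty

# Illustrative, untrained interaction network.
W1 = rng.normal(size=(n_traits, hidden))
W2 = rng.normal(size=(hidden, 1))

def endorse_prob(theta):
    """P(choosing statement A over B) for one pairwise FC comparison."""
    # Masked trait-item interaction: the Q-vector zeroes out unloaded traits,
    # encoding the per-item unidimensionality assumption.
    x = q * disc * (theta - diff)
    # Clipping weights to be non-negative keeps the network monotone in x,
    # since tanh and the logistic function are both increasing.
    h = np.tanh(x @ np.clip(W1, 0.0, None))
    logit = float(h @ np.clip(W2, 0.0, None))
    return 1.0 / (1.0 + np.exp(-logit))
```

Because every weight is clipped to be non-negative and every activation is increasing, `endorse_prob` is nondecreasing in the trait the item loads on; this ordering guarantee is what makes the estimated trait profiles and item parameters readable as diagnostics rather than opaque embeddings.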
Problem

Research questions and friction points this paper is trying to address.

Develops a neural model for forced-choice personality tests
Improves interpretability of test results using neural networks
Validates model accuracy on real and simulated datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning-based Forced-Choice Neural Cognitive Diagnostic Model
Interpretable participant and item parameters
Monotonicity assumption enhances interpretability
Xiaoyu Li
School of Education, Yangzhou University
Jin Wu
Lab of Artificial Intelligence for Education, East China Normal University; Shanghai Institute of Artificial Intelligence for Education, East China Normal University; School of Computer Science and Technology, East China Normal University
Shaoyang Guo
Peking University
Physics; AI
Haoran Shi
Peking University; Carnegie Mellon University; Amazon
Machine Learning; Natural Language Processing
Chanjin Zheng
East China Normal University
Educational measurement; psychometrics; applied statistics