P-MIA: A Profiled-Based Membership Inference Attack on Cognitive Diagnosis Models

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cognitive diagnosis models (CDMs) enable fine-grained learner profiling in intelligent education, yet their training relies on sensitive student data, and associated privacy risks remain unassessed. This paper presents the first systematic study of membership inference attacks (MIAs) against CDMs. We propose a gray-box MIA framework leveraging knowledge state vectors and prediction probabilities—exploiting model interpretability interfaces (e.g., visual radar charts) and internal representations to achieve multimodal feature fusion and invertible knowledge state reconstruction. Evaluated on three real-world educational datasets, our attack significantly outperforms conventional black-box baselines, improving inference accuracy by 12.7%–28.3%. Beyond quantifying membership leakage risks inherent to CDMs, we further demonstrate their utility as an auditing tool for machine unlearning: our analysis reveals critical vulnerabilities in existing unlearning mechanisms, particularly their failure to adequately protect knowledge state information.

📝 Abstract
Cognitive diagnosis models (CDMs) are pivotal for creating fine-grained learner profiles in modern intelligent education platforms. However, these models are trained on sensitive student data, raising significant privacy concerns. While membership inference attacks (MIAs) have been studied in various domains, their application to CDMs remains a critical research gap, leaving their privacy risks unquantified. This paper is the first to systematically investigate MIAs against CDMs. We introduce a novel and realistic grey-box threat model that exploits the explainability features of these platforms, where a model's internal knowledge state vectors are exposed to users through visualizations such as radar charts. We demonstrate that these vectors can be accurately reverse-engineered from such visualizations, creating a potent attack surface. Based on this threat model, we propose a profile-based MIA (P-MIA) framework that leverages both the model's final prediction probabilities and the exposed internal knowledge state vectors as features. Extensive experiments on three real-world datasets against mainstream CDMs show that our grey-box attack significantly outperforms standard black-box baselines. Furthermore, we showcase the utility of P-MIA as an auditing tool by successfully evaluating the efficacy of machine unlearning techniques and revealing their limitations.
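The grey-box attack described above fuses two feature sources: the CDM's prediction probabilities (available in a black-box setting) and the exposed knowledge state vector (the grey-box addition). A minimal sketch of this fusion, with hypothetical feature names and dimensions not taken from the paper:

```python
import numpy as np

def fuse_features(pred_probs, knowledge_state):
    """Concatenate the CDM's prediction probabilities with the exposed
    knowledge state vector into one attack feature vector. The paper's
    exact feature construction may differ; this only illustrates the
    black-box vs. grey-box feature gap."""
    return np.concatenate([
        np.asarray(pred_probs, dtype=float),
        np.asarray(knowledge_state, dtype=float),
    ])

# Toy example: 3 answered questions, 5 knowledge concepts (hypothetical sizes).
probs = [0.91, 0.12, 0.77]          # predicted correctness per question
state = [0.8, 0.4, 0.6, 0.9, 0.3]   # knowledge state (e.g. read off a radar chart)

black_box_features = np.asarray(probs)        # baseline attack input
grey_box_features = fuse_features(probs, state)  # P-MIA-style input
print(grey_box_features.shape)  # (8,)
```

A binary membership classifier (member vs. non-member) would then be trained on such vectors; the grey-box variant simply sees the richer fused input.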
Problem

Research questions and friction points this paper is trying to address.

Investigating membership inference attacks on cognitive diagnosis models
Exploiting explainability features to reverse-engineer knowledge state vectors
Proposing profile-based attack framework to quantify privacy risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages model prediction probabilities for attack
Uses exposed knowledge state vectors as features
Reverse-engineers vectors from explainability visualizations
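The last point, recovering knowledge state vectors from radar-chart visualizations, reduces to simple geometry if one assumes each vertex lies on a ray from the chart center and the outer ring corresponds to full mastery (1.0). A sketch under those assumptions (the paper's reconstruction procedure may differ in detail):

```python
import math

def invert_radar_chart(vertices, center, max_radius):
    """Recover per-concept mastery values from radar-chart vertex
    coordinates: each value is the vertex's distance from the center,
    normalized by the outer-ring radius. Assumes an undistorted chart
    with a known center and outer radius (hypothetical setup)."""
    cx, cy = center
    return [math.hypot(x - cx, y - cy) / max_radius for x, y in vertices]

# Toy chart: 4 concepts, unit outer radius, true values 0.5, 1.0, 0.25, 0.75.
verts = [(0.5, 0.0), (0.0, 1.0), (-0.25, 0.0), (0.0, -0.75)]
values = invert_radar_chart(verts, (0.0, 0.0), 1.0)
print(values)  # [0.5, 1.0, 0.25, 0.75]
```

In practice the vertex coordinates would first be extracted from the rendered image (e.g. by detecting the polygon outline), after which this normalization yields the vector fed into the attack.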