🤖 AI Summary
To address performance degradation, fairness deterioration, and privacy-compliance challenges in cross-institutional retention prediction for resource-constrained community colleges, this paper proposes a fair and privacy-preserving transfer learning framework. Methodologically, we design a demographic-similarity-guided sequential transfer training strategy and introduce a sensitive-group-adaptive threshold calibration mechanism, enabling model adaptation even when no local fine-tuning data are available. We further conduct the first systematic empirical validation of contextual factors, such as student intake composition and curriculum structure, as predictive indicators of transferability. Evaluated on administrative records from 4 research universities and 23 community colleges, covering over 800,000 students across seven cohorts, our framework reduces the equal opportunity difference by 32% while maintaining baseline accuracy, substantially alleviating the fairness-generalization trade-off inherent in cross-campus deployment.
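The two fairness ingredients named above can be made concrete with a small sketch: equal opportunity difference is the gap in true-positive rates between sensitive groups, and group-adaptive threshold calibration searches for per-group decision thresholds that shrink that gap. The grid search below is a minimal stand-in for the paper's calibration mechanism; the function names, the threshold grid, and the binary-group setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def equal_opportunity_difference(y_true, y_score, groups, thresholds):
    """Absolute gap in true-positive rates between two sensitive groups,
    each judged against its own decision threshold."""
    tprs = {}
    for g in (0, 1):
        positives = (groups == g) & (y_true == 1)
        tprs[g] = np.mean(y_score[positives] >= thresholds[g])
    return abs(tprs[0] - tprs[1])

def calibrate_group_thresholds(y_true, y_score, groups,
                               grid=np.linspace(0.05, 0.95, 19)):
    """Pick per-group thresholds that minimize the equal opportunity
    difference on held-out data (simple post-processing sketch)."""
    best, best_gap = None, np.inf
    for t0 in grid:
        for t1 in grid:
            gap = equal_opportunity_difference(
                y_true, y_score, groups, {0: t0, 1: t1})
            if gap < best_gap:
                best, best_gap = {0: t0, 1: t1}, gap
    return best, best_gap

# Synthetic illustration: positives in group 1 receive systematically
# lower scores, mimicking a transferred model that underserves them.
rng = np.random.default_rng(0)
n = 2000
groups = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)
y_score = np.clip(0.5 * y_true + 0.25 * rng.random(n) - 0.15 * groups, 0, 1)

uniform_gap = equal_opportunity_difference(y_true, y_score, groups, {0: 0.5, 1: 0.5})
_, calibrated_gap = calibrate_group_thresholds(y_true, y_score, groups)
```

Because a single shared threshold is one point on the grid, the calibrated per-group thresholds can never do worse than the uniform 0.5 cutoff on the data they are fit to; in this synthetic example they close most of the gap.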
📝 Abstract
Predictive analytics is widely used in learning analytics, but many resource-constrained institutions lack the capacity to develop their own models or rely on proprietary ones trained in different contexts with little transparency. Transfer learning holds promise for expanding equitable access to predictive analytics but remains underexplored due to legal and technical constraints. This paper examines transfer learning strategies for retention prediction at U.S. two-year community colleges. We envision a scenario where community colleges collaborate with one another and with four-year universities to develop retention prediction models under privacy constraints, and we evaluate the risks of cross-institutional model transfer and strategies for mitigating them. Using administrative records from 4 research universities and 23 community colleges, covering over 800,000 students across 7 cohorts, we identify performance and fairness degradation when external models are deployed locally without adaptation. Publicly available contextual information can forecast these performance drops and offer early guidance on model portability. For developers under privacy regulations, sequential training that selects institutions based on demographic similarity enhances fairness without compromising performance. For institutions lacking local data to fine-tune source models, customizing evaluation thresholds for sensitive groups outperforms standard transfer techniques in improving performance and fairness. Our findings suggest the value of transfer learning for more accessible educational predictive modeling and call for judicious use of contextual information in model training, selection, and deployment to achieve reliable and equitable model transfer.
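The sequential training strategy mentioned above depends on ordering source institutions by demographic similarity to the deployment site. A minimal sketch of that ordering step, assuming demographic profiles are represented as vectors of category shares and similarity is cosine similarity (the paper does not commit to these choices; the train-least-similar-first ordering is likewise an illustrative assumption):

```python
import numpy as np

def similarity_order(target_profile, source_profiles):
    """Order source institutions for sequential transfer training by
    cosine similarity of demographic profiles to the target institution.
    Least similar first, most similar last, so the final training
    updates come from data closest to the deployment context."""
    t = np.asarray(target_profile, dtype=float)
    scored = []
    for name, profile in source_profiles.items():
        p = np.asarray(profile, dtype=float)
        sim = (p @ t) / (np.linalg.norm(p) * np.linalg.norm(t))
        scored.append((sim, name))
    return [name for sim, name in sorted(scored)]

# Hypothetical profiles: shares of three demographic categories per campus.
target = [0.5, 0.3, 0.2]
sources = {
    "Campus A": [0.5, 0.3, 0.2],  # identical mix to the target
    "Campus B": [0.2, 0.3, 0.5],  # inverted mix
    "Campus C": [0.4, 0.4, 0.2],  # moderately similar
}
order = similarity_order(target, sources)
```

A developer would then fine-tune the model on each institution's data in this order, ending with the source most demographically similar to the target campus.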