Towards Public Administration Research Based on Interpretable Machine Learning

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the longstanding emphasis on causal inference over predictive modeling in quantitative public administration research, an imbalance that has constrained theoretical advancement and inferential reliability. To bridge this gap, the project pioneers a systematic integration of interpretable machine learning with established public administration research paradigms. By constructing domain-specific datasets, training and evaluating predictive models, and applying state-of-the-art explanation techniques, the approach enables prediction and causal inference to work in tandem. This not only enhances the credibility and generalizability of quantitative analyses but also offers a novel pathway for prioritizing explanatory phenomena, generating theoretical hypotheses, and facilitating knowledge translation. Ultimately, it advances the adoption of predictive modeling within social science research.

📝 Abstract
Causal relationships play a pivotal role in public administration research. Validating the predictability of these relationships is a crucial precondition for reliable causal inference. However, prediction has not received adequate attention in quantitative research in public administration or the broader social sciences. The advent of interpretable machine learning presents a significant opportunity to integrate prediction into quantitative public administration research. This article examines the fundamental principles of interpretable machine learning and its current applications in social science research. Building on this foundation, it details the implementation process of interpretable machine learning, encompassing dataset construction, model training, model evaluation, and model interpretation. Finally, it explores the disciplinary value of interpretable machine learning for public administration, highlighting its potential to improve the generalizability of inference, facilitate the selection of optimal explanations for phenomena, stimulate the construction of theoretical hypotheses, and provide a platform for knowledge translation. As a complement to traditional causal inference methods, interpretable machine learning ushers in a new era of credibility in quantitative public administration research.
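The four-step workflow the abstract describes (dataset construction, model training, model evaluation, model interpretation) can be sketched in code. This is a minimal illustrative example, not the paper's implementation: the dataset is synthetic, the variable names (e.g., budget, staffing) are assumptions, and permutation importance stands in for the broader family of explanation techniques the article surveys.

```python
# Hypothetical sketch of an interpretable-ML workflow, assuming scikit-learn.
# All data and variable names are illustrative, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1. Dataset construction: a synthetic outcome driven mainly by feature 0
#    (imagine predictors such as budget, staffing, digitalization level).
n = 1000
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# 2. Model training on a held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# 3. Model evaluation: out-of-sample predictive accuracy (the "predictability
#    precondition" for credible inference).
r2 = r2_score(y_te, model.predict(X_te))

# 4. Model interpretation: permutation importance ranks predictors by how much
#    shuffling each one degrades held-out performance.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print(round(r2, 2), ranking.tolist())
```

Because feature 0 dominates the synthetic outcome, it should rank first in the importance ordering; in applied work, such a ranking is what guides which explanatory factors merit follow-up causal analysis.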
Problem

Research questions and friction points this paper is trying to address.

public administration
causal inference
prediction
quantitative research
interpretable machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

interpretable machine learning
causal inference
public administration
predictive modeling
model interpretation
Zhanyu Liu
Shanghai Jiao Tong University
Recommendation System · Large Language Model · Data Mining · Time Series Analysis
Yang Yu
School of Public Administration, Hunan University, Changsha, 410012, China