Bridging Prediction and Intervention Problems in Social Systems

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Automated Decision Systems (ADS) have long been confined to predictive paradigms, limiting their capacity to meaningfully intervene in complex socio-technical problems. Method: We propose an intervention-centered ADS design paradigm, grounded in a unified problem formulation framework that integrates causal inference, statistical estimation, and policy evaluation techniques to model the full causal chain—from prediction to decision to outcome. Contribution/Results: First, we establish a theoretical bridge from prediction to intervention, exposing the societal utility bottleneck of isolated predictive tasks. Second, we formalize intervention-oriented evaluation criteria and design principles for ADS. Third, we provide a methodological foundation enabling ADS deployment with enhanced fairness, interpretability, and real-world policy impact—shifting ADS from merely describing reality toward actively shaping improved outcomes. This paradigm advances the operationalization of responsible, socially impactful algorithmic decision-making.

📝 Abstract
Many automated decision systems (ADS) are designed to solve prediction problems -- where the goal is to learn patterns from a sample of the population and apply them to individuals from the same population. In reality, these prediction systems operationalize holistic policy interventions in deployment. Once deployed, ADS can shape impacted population outcomes through an effective policy change in how decision-makers operate, while also being defined by past and present interactions between stakeholders and the limitations of existing organizational, as well as societal, infrastructure and context. In this work, we consider the ways in which we must shift from a prediction-focused paradigm to an interventionist paradigm when considering the impact of ADS within social systems. We argue this requires a new default problem setup for ADS beyond prediction, to instead consider predictions as decision support, final decisions, and outcomes. We highlight how this perspective unifies modern statistical frameworks and other tools to study the design, implementation, and evaluation of ADS systems, and point to the research directions necessary to operationalize this paradigm shift. Using these tools, we characterize the limitations of focusing on isolated prediction tasks, and lay the foundation for a more intervention-oriented approach to developing and deploying ADS.
Problem

Research questions and friction points this paper is trying to address.

Shift from prediction to intervention in automated decision systems
Unify statistical frameworks for ADS design and evaluation
Address limitations of isolated prediction tasks in social systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shift from prediction to intervention paradigm
Unify statistical frameworks for ADS design
Focus on decision support and outcomes
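The paradigm contrast above can be made concrete with a small sketch. This is a purely illustrative toy on synthetic data, not the paper's method: the feature, risk model, threshold, and the helper `induced_outcome_rate` are all hypothetical. It shows how the same risk predictions look when scored as a prediction task (accuracy against observed outcomes) versus as an intervention (the outcome rate the induced decision policy actually produces on the population).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: x is a risk feature; y0 is the (binary) adverse
# outcome that would occur WITHOUT intervention, y1 the potential outcome
# WITH intervention (here, intervention halves the adverse risk).
n = 10_000
x = rng.normal(size=n)
p_adverse = 1 / (1 + np.exp(-x))        # true risk, increasing in x
y0 = rng.binomial(1, p_adverse)         # potential outcome, no treatment
y1 = rng.binomial(1, 0.5 * p_adverse)   # potential outcome, treatment

# Prediction-focused view: score the model by how well it predicts y0.
y_hat = (p_adverse > 0.5).astype(int)
accuracy = (y_hat == y0).mean()

# Intervention-focused view: the same predictions, used as a decision
# policy d(x) = 1[risk > threshold], are judged by the population
# outcomes they induce once deployed.
def induced_outcome_rate(threshold):
    treat = p_adverse > threshold
    return np.where(treat, y1, y0).mean()

rate_no_policy = y0.mean()
rate_policy = induced_outcome_rate(0.5)
print(f"prediction accuracy:       {accuracy:.3f}")
print(f"adverse rate, no policy:   {rate_no_policy:.3f}")
print(f"adverse rate, with policy: {rate_policy:.3f}")
```

The point of the sketch is that accuracy alone says nothing about the deployed system's effect: only by modeling decisions and their induced outcomes (the counterfactual `y1` column, which real deployments never observe directly) can the policy's impact be evaluated.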
Lydia T. Liu
Assistant Professor of Computer Science, Princeton University
Machine Learning, Statistics, Decision Making, Algorithmic Fairness
Inioluwa Deborah Raji
UC Berkeley
machine learning, evaluation, auditing, algorithms, society
Angela Zhou
Assistant Professor at University of Southern California, Data Sciences and Operations and Computer Science
operations research/operations management, machine learning, causal inference, algorithmic fairness
Luke Guerdan
Carnegie Mellon University
Measurement, Evaluation Science, Machine Learning, Responsible AI, Human-Computer Interaction
Jessica Hullman
Ginni Rometty Professor of Computer Science, Northwestern University
uncertainty quantification, AI for decision-making, metascience, visualization
Daniel Malinsky
Assistant Professor of Biostatistics at Columbia University
Causal Inference, Graphical Models, Machine Learning, Philosophy of Science, Foundations of
Bryan Wilder
Assistant Professor of Machine Learning, Carnegie Mellon University
Artificial intelligence, optimization, machine learning, social networks
Simone Zhang
Hammaad Adam
Amanda Coston
Microsoft
algorithmic fairness, machine learning, causal inference
Ben Laufer
Cornell Tech
Ezinne Nwankwo
Michael Zanger-Tishler
Eli Ben-Michael
Assistant Professor, Carnegie Mellon University
causal inference, applied statistics, quantitative social science
Solon Barocas
Microsoft Research; Cornell University
Avi Feller
UC Berkeley
Marissa Gerchick
Talia Gillis
Shion Guha
University of Toronto
Human-Centered Data Science, Public Interest Technology, Responsible AI, AI Policy
Daniel Ho
Lily Hu
Kosuke Imai
Professor of Government and of Statistics, Harvard University
applied statistics, causal inference, computational social science, quantitative social science, political methodology
Sayash Kapoor
CS PhD, Princeton University
Reproducibility, AI agents, Societal impacts
Joshua Loftus
London School of Economics
Statistics, Statistical Methodology, Machine Learning, Artificial Intelligence
Razieh Nabi
Rollins Assistant Professor of Biostatistics, Emory University
Causal Inference, Missing Data, Algorithmic Fairness, Graphical Models, Semiparametric Statistics