Collective dynamics of strategic classification

๐Ÿ“… 2025-08-12
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper investigates the feedback loop between strategic user adaptation and algorithmic retraining in high-stakes domains (e.g., credit, healthcare). It addresses the rising societal costs that arise when users over-adapt to, or deceive, non-robust classifiers. The authors propose a dynamic modeling framework grounded in evolutionary game theory, integrating strategic classification, deception detection, algorithmic recourse mechanisms, and institutional response-speed analysis. The key findings are: (i) perfect deception detection incentivizes genuine behavioral improvement; (ii) algorithmic recourse significantly increases the rate of constructive user adaptation; and (iii) institutional adjustment speed governs the system's equilibrium structure, with periodic dynamics emerging under strong regulatory oversight. These findings provide theoretical foundations and actionable intervention pathways for designing robust, fair, and behavior-guiding AI decision systems.

๐Ÿ“ Abstract
Classification algorithms based on Artificial Intelligence (AI) are nowadays applied in high-stakes decisions in finance, healthcare, criminal justice, or education. Individuals can strategically adapt to the information gathered about classifiers, which in turn may require algorithms to be re-trained. Which collective dynamics will result from users' adaptation and algorithms' retraining? We apply evolutionary game theory to address this question. Our framework provides a mathematically rigorous way of treating the problem of feedback loops between collectives of users and institutions, allowing us to test interventions to mitigate the adverse effects of strategic adaptation. As a case study, we consider institutions deploying algorithms for credit lending. We consider several scenarios, each representing different interaction paradigms. When algorithms are not robust against strategic manipulation, we are able to capture previous challenges discussed in the strategic classification literature, whereby users either pay excessive costs to meet the institutions' expectations (leading to high social costs) or game the algorithm (e.g., provide fake information). From this baseline setting, we test the role of improving gaming detection and providing algorithmic recourse. We show that increased detection capabilities reduce social costs and could lead to users' improvement; when perfect classifiers are not feasible (as is likely in practice), algorithmic recourse can steer the dynamics towards high rates of user improvement. The speed at which the institutions re-adapt to the users' population plays a role in the final outcome. Finally, we explore a scenario where strict institutions provide actionable recourse to their unsuccessful users and observe cycling dynamics so far unnoticed in the literature.
Problem

Research questions and friction points this paper is trying to address.

Study feedback loops between users and AI classifiers in strategic classification
Analyze collective dynamics from user adaptation and algorithm retraining
Test interventions like gaming detection and recourse to reduce social costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evolutionary game theory models strategic adaptation dynamics
Algorithmic recourse steers users towards improvement
Enhanced detection reduces social gaming costs
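The evolutionary game theory machinery behind these points can be illustrated with a minimal replicator-dynamics sketch. The payoff values and two-strategy setup below (users choosing between genuine improvement and gaming the classifier) are illustrative assumptions for exposition, not the paper's calibrated model, which also tracks the institutions' side.

```python
import numpy as np

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i * (f_i - f_bar)."""
    f = payoff @ x      # fitness of each strategy given current frequencies
    f_bar = x @ f       # population-average fitness
    return x + dt * x * (f - f_bar)

# Strategies: 0 = improve genuinely, 1 = game the classifier.
# Assumed payoffs for a regime with strong gaming detection,
# where gaming pays strictly less than improving.
payoff = np.array([[3.0, 1.0],
                   [2.0, 0.5]])

x = np.array([0.5, 0.5])        # initial strategy frequencies
for _ in range(5000):
    x = replicator_step(x, payoff)

print(x)  # frequencies converge towards the "improve" strategy
```

Under these assumed payoffs, improvement strictly dominates gaming, so the population converges to all-improvers, mirroring finding (i) that strong detection incentivizes genuine improvement. Weakening detection (raising the gaming payoffs) flips the equilibrium, and coupling in an institution population is what produces the cycling dynamics the paper reports.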
๐Ÿ”Ž Similar Papers
No similar papers found.
Marta C. Couto
Informatics Institute, University of Amsterdam, The Netherlands
Flavia Barsotti
ING Analytics, ING Group, Amsterdam, The Netherlands; Delft Institute of Applied Mathematics (DIAM), TU Delft, The Netherlands
Fernando P. Santos
Informatics Institute (IvI), University of Amsterdam
multiagent systems, complex systems, evolutionary game theory, network science, algorithmic fairness