🤖 AI Summary
This paper addresses the problem of computing optimal responses in strategic classification, where classifier deployment induces strategic user behavior that shifts the data distribution. Existing methods are largely restricted to linear models and fail to yield tractable solutions for nonlinear classifiers. To overcome this limitation, we propose a novel framework based on Lagrangian dual optimization of a surrogate objective, unifying optimal-response computation with classifier training. Our approach exactly recovers known optimal solutions in the linear case and exposes theoretical shortcomings of several prior methods. For the first time, it enables scalable, differentiable estimation of optimal responses for nonlinear models—including kernel SVMs and neural networks. Experiments demonstrate that our method significantly improves model robustness and generalization under strategic manipulation, establishing the first general-purpose, practical paradigm for nonlinear strategic classification.
📝 Abstract
We consider the problem of strategic classification, where the act of deploying a classifier leads to strategic behaviour that induces a distribution shift on subsequent observations. Current approaches to learning classifiers in strategic settings focus primarily on the linear setting, but in many cases non-linear classifiers are more suitable. A central limitation to progress for non-linear classifiers arises from the inability to compute best responses in these settings. We present a novel method for computing the best response by optimising the Lagrangian dual of the agents' objective. We demonstrate that our method reproduces best responses in linear settings, identifying key weaknesses in existing approaches. We present further results demonstrating that our method can be straightforwardly applied to non-linear classifier settings, where it is useful for both evaluation and training.
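To make the core idea concrete, here is a minimal sketch (not the paper's actual surrogate objective or training pipeline) of computing an agent's best response via the Lagrangian dual. We assume a toy non-linear classifier `f` and a quadratic movement cost, both hypothetical: the agent solves min over x' of cost(x, x') subject to f(x') ≥ 0, whose Lagrangian is L(x', λ) = cost(x, x') − λ·f(x') with λ ≥ 0. An inner gradient loop minimises over x' and an outer projected-ascent loop updates the multiplier.

```python
import numpy as np

CENTER = np.array([2.0, 0.0])  # assumed decision region: a disc around (2, 0)

def f(x):
    # Toy non-linear classifier: positive inside the unit disc around CENTER.
    return 1.0 - np.sum((x - CENTER) ** 2)

def grad_f(x):
    return -2.0 * (x - CENTER)

def best_response(x, lr=0.1, outer=200, inner=100):
    """Best response to f via the Lagrangian dual of the agent's problem:
        min_{x'} 0.5 ||x' - x||^2   s.t.   f(x') >= 0.
    Inner loop: gradient descent on L(x', lam) over x' (strongly convex here).
    Outer loop: projected gradient ascent on the dual variable lam >= 0.
    """
    x_new, lam = x.copy(), 1.0
    for _ in range(outer):
        for _ in range(inner):
            # grad of L w.r.t. x': movement-cost term minus lam * grad f
            g = (x_new - x) - lam * grad_f(x_new)
            x_new = x_new - lr * g
        # dual gradient is -f(x'(lam)); project multiplier onto lam >= 0
        lam = max(0.0, lam - lr * f(x_new))
    return x_new

x0 = np.array([0.0, 0.0])
xr = best_response(x0)
# xr lands approximately on the boundary f(x') = 0 nearest to x0
```

In this toy instance the agent at the origin moves to roughly (1, 0), the closest point of the acceptance region. A full strategic model would additionally compare the incurred cost against the gain from positive classification to decide whether moving is worthwhile at all; that check, and the integration with classifier training, are left out of this sketch.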