Strategic Classification with Non-Linear Classifiers

📅 2025-05-29
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This paper studies strategic classification, in which users incur costs to manipulate their input features in order to obtain better classifier predictions. It systematically investigates the learning dynamics and performance limits of non-linear classifiers (e.g., neural networks) under such strategic behavior. Combining game-theoretic modeling, decision-boundary analysis, theoretical derivations for non-linear models, and empirical evaluation, the authors establish that even universal approximators such as deep neural networks suffer significant expressive degradation under strategic manipulation: their classical universal approximation property fails in strategic environments. This reveals a fundamental tension between model capacity and strategic robustness. Crucially, the paper derives the first rigorous theoretical lower bound on the strategic robustness of non-linear classifiers, closing a critical theoretical gap left by the linear-regime assumptions prevalent in prior work.
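The costly feature manipulation described above can be made concrete with a minimal sketch (my illustration, not code from the paper): a user best-responds to a linear classifier under a Euclidean cost model, moving to the nearest positively classified point whenever the cost of doing so fits within a budget. The function name, the 2-norm cost, and the budget value are all illustrative assumptions.

```python
import math

def best_response(x, w, b, budget=2.0):
    """Strategic best response to the linear classifier sign(w.x - b).

    A negatively classified user moves to the nearest point on the
    positive side, but only if the Euclidean cost is within the budget.
    (Toy 2-norm cost model; the budget value is an illustrative assumption.)
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) - b
    if score >= 0:
        return x                      # already positive: no reason to move
    norm = math.sqrt(sum(wi * wi for wi in w))
    dist = -score / norm              # distance from x to the hyperplane
    if dist > budget:
        return x                      # gaming is too costly: stay put
    # step perpendicular to the boundary, landing just on the positive side
    step = dist + 1e-9
    return [xi + step * wi / norm for xi, wi in zip(x, w)]

w, b = [1.0, 1.0], 1.0
x_gamed = best_response([0.0, 0.0], w, b)     # close enough to game
print(sum(wi * xi for wi, xi in zip(w, x_gamed)) - b >= 0)  # True
```

The classifier therefore never observes truly negative users near its boundary, which is the mechanism behind the learning difficulties the paper analyzes.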

📝 Abstract
In strategic classification, the standard supervised learning setting is extended to support the notion of strategic user behavior in the form of costly feature manipulations made in response to a classifier. While standard learning supports a broad range of model classes, the study of strategic classification has, so far, been dedicated mostly to linear classifiers. This work aims to expand the horizon by exploring how strategic behavior manifests under non-linear classifiers and what this implies for learning. We take a bottom-up approach showing how non-linearity affects decision boundary points, classifier expressivity, and model class complexity. A key finding is that universal approximators (e.g., neural nets) are no longer universal once the environment is strategic. We demonstrate empirically how this can create performance gaps even on an unrestricted model class.
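The abstract's key finding, that universal approximators are no longer universal once the environment is strategic, can be glimpsed in a toy 1-D sketch (an assumption-laden illustration of mine, not the paper's construction): when users best-respond under a cost budget, the induced positive region of a classifier is a dilation of its nominal positive region, so sufficiently narrow targets become unrealizable no matter the classifier chosen. The `BUDGET` value and interval model are hypothetical.

```python
# Toy 1-D illustration: under best responses with a cost budget, the
# *induced* positive region of any interval classifier is a dilation of
# its nominal positive region, so a narrow target cannot be realized.

BUDGET = 2.0   # illustrative manipulation budget (an assumption)

def induced_positive(a, b, x):
    """Is x classified positive *after* best-responding to the
    classifier 'positive iff a <= x <= b'?  A negative user within
    BUDGET of the interval simply moves onto it."""
    return a - BUDGET <= x <= b + BUDGET

xs = [i / 100 for i in range(-500, 501)]        # grid on [-5, 5]
target = [0.0 <= x <= 1.0 for x in xs]          # desired positive region

# Every nonempty induced region has width >= 2 * BUDGET = 4, so the
# width-1 target cannot be matched exactly by any interval classifier.
best_err = min(
    sum(induced_positive(a / 10, a / 10 + w / 10, x) != t
        for x, t in zip(xs, target))
    for a in range(-50, 50)
    for w in range(0, 50)
)
print(best_err > 0)   # True: some error is unavoidable in the strategic setting
```

The same erosion of fine-grained decision regions applies to richer model classes, which is the intuition behind the loss of universal approximation in strategic environments.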
Problem

Research questions and friction points this paper is trying to address.

How does strategic user behavior manifest under non-linear classifiers?
How does strategic manipulation affect decision boundaries, classifier expressivity, and model class complexity?
Do universal approximators remain universal in strategic environments?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bottom-up analysis of how non-linearity affects decision boundary points, classifier expressivity, and model class complexity
Proof that universal approximators (e.g., neural nets) lose their universality once the environment is strategic
Empirical demonstration that this creates performance gaps even on an unrestricted model class
Benyamin Trachtenberg
Technion – Israel Institute of Technology, Haifa, Israel
Nir Rosenfeld
Technion
Machine Learning · Human Behavior · Strategic Classification · Performative Prediction