🤖 AI Summary
Strategic classification (SC) models how agents strategically manipulate their features to obtain favorable classification outcomes. However, existing approaches, which rely on linear or shallow models, exhibit limited generalizability and scalability in large-scale real-world settings such as finance and internet platforms. This paper introduces GLIM, the first LLM-based SC framework, which is gradient-free and requires no fine-tuning. GLIM leverages in-context learning to implicitly simulate, within the self-attention forward pass, the bi-level strategic interaction between feature manipulation and classifier adaptation. Theoretically, GLIM enjoys provably bounded generalization error. Empirically, it significantly outperforms state-of-the-art baselines on both synthetic and real-world datasets, demonstrating robustness, computational efficiency, and strong adaptability to dynamic environments. By unifying strategic reasoning and classification within a single LLM inference process, GLIM establishes a scalable, principled paradigm for strategic classification.
📝 Abstract
Strategic classification (SC) explores how individuals or entities strategically modify their features to achieve favorable classification outcomes. However, existing SC methods, which are largely based on linear models or shallow neural networks, face significant limitations in scalability and capacity when applied to real-world datasets of rapidly growing scale, especially in financial services and the internet sector. In this paper, we investigate how to leverage large language models (LLMs) to design a more scalable and efficient SC framework, particularly as the number of individuals engaged in decision-making processes grows. Specifically, we introduce GLIM, a gradient-free SC method grounded in in-context learning. During the forward pass of self-attention, GLIM implicitly simulates the typical bi-level optimization process of SC, covering both feature manipulation and decision rule optimization. Because it requires no fine-tuning of the LLMs, GLIM adapts cost-effectively to dynamic strategic environments. Theoretically, we prove that GLIM enables pre-trained LLMs to adapt to a broad range of strategic manipulations. We validate our approach through experiments with a collection of pre-trained LLMs on real-world and synthetic datasets in the financial and internet domains, demonstrating that GLIM is both robust and efficient and offers an effective solution for large-scale SC tasks.
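The abstract describes a gradient-free classifier that adapts through in-context learning alone, with the bi-level interaction (agents manipulating features, the decision rule adapting) handled implicitly in the forward pass. The paper's actual prompting and attention-level mechanism are not given here; the following is a minimal sketch, assuming a generic frozen-LLM completion callable `llm`, of how such an in-context SC query could be framed. All names (`classify_strategically`, the feature fields, the label set) are illustrative, not from the paper.

```python
from typing import Callable, Dict, List

def classify_strategically(
    llm: Callable[[str], str],          # any frozen text-completion LLM (assumed interface)
    demos: List[Dict],                  # in-context examples: {"features": {...}, "label": str}
    query_features: Dict[str, float],
) -> str:
    """Hedged sketch of gradient-free, in-context strategic classification.

    Demonstrations pair possibly manipulated features with ground-truth labels,
    so the frozen LLM must infer a manipulation-aware decision rule entirely
    during inference: no gradients, no fine-tuning.
    """
    lines = [
        "Agents may strategically alter reported features to obtain a favorable label.",
        "Classify each applicant as 'approve' or 'reject', accounting for manipulation.",
        "",
    ]
    for d in demos:
        feats = ", ".join(f"{k}={v}" for k, v in d["features"].items())
        lines.append(f"Features: {feats} -> Label: {d['label']}")
    feats = ", ".join(f"{k}={v}" for k, v in query_features.items())
    lines.append(f"Features: {feats} -> Label:")
    return llm("\n".join(lines)).strip()

# Usage with a trivial stand-in for the LLM (a real deployment would call a model API):
if __name__ == "__main__":
    demos = [
        {"features": {"income": 85, "debt": 10}, "label": "approve"},
        {"features": {"income": 30, "debt": 55}, "label": "reject"},
    ]
    fake_llm = lambda prompt: "approve"  # placeholder response for demonstration only
    print(classify_strategically(fake_llm, demos, {"income": 70, "debt": 20}))
```

Because adaptation happens purely in the prompt, updating the classifier for a new strategic environment amounts to swapping the demonstrations, which is the cost-effective adaptation property the abstract claims.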