AI Summary
This paper uncovers a fundamental tension between interpretability and model security in Machine Learning as a Service (MLaaS): interpretability techniques, particularly counterfactual explanations, substantially exacerbate model extraction vulnerabilities. To address this, we introduce the first *competitive analytical framework* tailored to tree-based models (including decision trees, random forests, and gradient-boosted trees), formally characterizing the minimal oracle query complexity required for exact model reconstruction and deriving a tight upper bound on it. We further propose a novel reconstruction algorithm that guarantees *perfect fidelity* and *strong anytime performance*, achieving zero-error model recovery. Extensive experiments on multiple benchmarks demonstrate the algorithm's efficiency and practicality. Our work is the first to establish, both theoretically and empirically, that offering interpretability services can intrinsically undermine model security.
Abstract
The advent of Machine Learning as a Service (MLaaS) has heightened the trade-off between model explainability and security. In particular, explainability techniques such as counterfactual explanations inadvertently increase the risk of model extraction attacks, enabling unauthorized replication of proprietary models. In this paper, we formalize and characterize the risks and inherent complexity of model reconstruction, focusing on the "oracle" queries required to faithfully infer the underlying prediction function. We present the first formal analysis of model extraction attacks through the lens of competitive analysis, establishing a foundational framework for evaluating their efficiency. Focusing on models based on additive decision trees (e.g., decision trees, gradient boosting, and random forests), we introduce novel reconstruction algorithms that achieve provably perfect fidelity while demonstrating strong anytime performance. Our framework provides theoretical bounds on the query complexity of extracting tree-based models, offering new insights into the security vulnerabilities of their deployment.
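To give intuition for why black-box oracle access makes tree extraction tractable, consider the simplest case: a single axis-aligned split. Its threshold can be localized by binary search using only label queries. The sketch below is a hypothetical toy illustration of this idea, not the paper's algorithm; `make_oracle` and `extract_threshold` are names invented here.

```python
# Toy illustration (not the paper's reconstruction algorithm):
# recovering the split threshold of a 1-D decision stump from
# black-box label queries via binary search.

def make_oracle(threshold):
    """Black-box prediction oracle for a decision stump on [0, 1]."""
    return lambda x: 1 if x >= threshold else 0

def extract_threshold(oracle, lo=0.0, hi=1.0, eps=1e-6):
    """Shrink the interval containing the decision boundary until it
    is narrower than eps; query count is O(log((hi - lo) / eps))."""
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        queries += 1
        if oracle(mid) == 1:
            hi = mid   # boundary lies at or below mid
        else:
            lo = mid   # boundary lies above mid
    return (lo + hi) / 2.0, queries

oracle = make_oracle(0.3125)
estimate, n_queries = extract_threshold(oracle)
print(estimate, n_queries)  # estimate within 1e-6 of 0.3125, ~20 queries
```

With additive ensembles and counterfactual explanations, each explanation reveals a point on a decision boundary directly, which is what drives the reduced query complexity the abstract refers to.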