🤖 AI Summary
This paper addresses the risk assessment of model functionality leakage—specifically, Model Extraction Attacks (MEAs)—in machine learning web applications, proposing the first attack-agnostic theoretical framework. Methodologically, it leverages Neural Tangent Kernel (NTK) theory to formalize linearized MEAs as regularized kernel classification problems, deriving fidelity-gap bounds and generalization error bounds, and introducing Model Recovery Complexity (MRC) as a novel quantitative risk metric. Empirically, it finds a strong positive correlation between victim model accuracy and extraction risk, so accuracy can serve as a complementary empirical metric. Evaluation across 16 model architectures and 5 benchmark datasets demonstrates high consistency between MRC and actual extraction success rates. Furthermore, the proposed tool, MER-Inspector, achieves up to 89.58% accuracy in ranking the relative extraction risk of arbitrary model pairs. This work establishes an interpretable, theoretically grounded, and quantifiable paradigm for assessing security risks in ML-as-a-service deployments.
📝 Abstract
Information leakage issues in machine learning-based Web applications have attracted increasing attention. While the risk of data privacy leakage has been rigorously analyzed, the theory of model function leakage, known as Model Extraction Attacks (MEAs), has not been well studied. In this paper, we are the first to understand MEAs theoretically from an attack-agnostic perspective and to propose analytical metrics for evaluating model extraction risks. Using Neural Tangent Kernel (NTK) theory, we formulate the linearized MEA as a regularized kernel classification problem and then derive fidelity-gap and generalization error bounds on the attack performance. Based on these theoretical analyses, we propose a new theoretical metric called Model Recovery Complexity (MRC), which measures the distance of weight changes between the victim and surrogate models to quantify risk. Additionally, we find that victim model accuracy, which shows a strong positive correlation with model extraction risk, can serve as an empirical metric. By integrating these two metrics, we propose a framework, namely Model Extraction Risk Inspector (MER-Inspector), to compare the extraction risks of models under different architectures by utilizing relative metric values. We conduct extensive experiments on 16 model architectures and 5 datasets. The experimental results demonstrate that the proposed metrics correlate strongly with model extraction risks, and that MER-Inspector can compare the extraction risks of any two models with an accuracy of up to 89.58%.
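The kernel-classification view of a linearized MEA can be sketched in a few lines of NumPy. This is a minimal illustration under a strong simplifying assumption: the victim is a plain linear model, for which the NTK feature map is the input itself and the NTK Gram matrix reduces to `X @ X.T`. It is not the paper's actual derivation, bounds, or MRC metric, and all names here are hypothetical.

```python
import numpy as np

# Sketch: under NTK theory, a linearized network behaves like a linear model
# in its parameter gradients, so fitting a surrogate on victim-labeled queries
# reduces to regularized kernel classification with
# K(x, x') = <grad_w f(x), grad_w f(x')>.
# For a linear victim f(x) = w.x, the NTK feature map is x itself.

def kernel_ridge_fit(X_query, y_victim, lam=1e-2):
    # Regularized kernel classification via ridge regression on +/-1 labels:
    # alpha = (K + lam*I)^{-1} y, where K is the NTK Gram matrix.
    K = X_query @ X_query.T
    alpha = np.linalg.solve(K + lam * np.eye(len(y_victim)), y_victim)
    return alpha

def kernel_predict(alpha, X_query, X_test):
    # Surrogate prediction: sign of the kernel expansion sum_i alpha_i K(x, x_i).
    return np.sign(X_test @ X_query.T @ alpha)

# Toy extraction: query a linear "victim" for hard labels, fit the surrogate,
# then measure fidelity (surrogate-victim agreement on held-out inputs).
rng = np.random.default_rng(0)
w_victim = np.array([1.0, -2.0, 0.5])
X_q = rng.normal(size=(200, 3))            # attacker's query set
y_q = np.sign(X_q @ w_victim)              # victim's hard labels
alpha = kernel_ridge_fit(X_q, y_q)
X_t = rng.normal(size=(100, 3))            # held-out evaluation inputs
fidelity = np.mean(kernel_predict(alpha, X_q, X_t) == np.sign(X_t @ w_victim))
```

In this toy setting the surrogate's fidelity approaches 1 as the query budget grows, which is the intuition behind bounding the fidelity gap of the attack in terms of the kernel problem's properties.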