🤖 AI Summary
This work addresses the risk of concept-level inference attacks against deep learning APIs operated as black-box, functionally anonymous services, i.e., exposed without domain-specific semantic cues. Existing model-targeted attacks rely on explicit domain knowledge, limiting their applicability. To overcome this, we propose Adaptive Domain Inference (ADI), an attack framework with a concept-level probabilistic adaptation mechanism that identifies the conceptual subset underlying a model's training data using few API queries. We also design a straightforward baseline attack, LDI, built on statistical hypothesis testing. Experiments show that ADI recovers semantically coherent, concept-level approximations of the training data distribution from black-box APIs, converges fastest, and requires the fewest target-model accesses among the candidate methods; the extracted data substantially boosts downstream attacks such as model inversion. These results provide the first empirical evidence that concept-level reverse engineering is feasible even under severely constrained, anonymized API interfaces.
📝 Abstract
With deep neural networks increasingly deployed in sensitive application domains such as healthcare and security, it is essential to understand what kind of sensitive information can be inferred from these models. Most known model-targeted attacks assume the attacker knows the application domain or the training data distribution. Can removing domain information from model APIs protect models from these attacks? This paper studies this critical problem. Unfortunately, even with minimal knowledge, i.e., access to the model as an unnamed function that leaks no meaning of its inputs and outputs, the proposed adaptive domain inference attack (ADI) can still successfully estimate relevant subsets of the training data. We show that the extracted relevant data can significantly improve, for instance, the performance of model-inversion attacks. Specifically, ADI utilizes a concept hierarchy extracted from a collection of available public and private datasets and a novel algorithm that adaptively tunes the likelihood of each leaf concept appearing in the unseen training data. We also design a straightforward hypothesis-testing-based attack, LDI. ADI not only extracts partial training data at the concept level but also converges fastest and requires the fewest target-model accesses among all candidate methods. Our code is available at https://anonymous.4open.science/r/KDD-362D.
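The core idea of adaptively tuning leaf-concept likelihoods can be sketched as follows. This is a hypothetical illustration, not the paper's exact algorithm: the multiplicative update rule, the `query_confidence` feedback signal, and all function names are assumptions introduced here for clarity.

```python
import random

def adapt_concept_likelihoods(leaves, query_confidence, rounds=50, seed=0):
    """Illustrative sketch of concept-level adaptation.

    leaves: list of leaf-concept names from a concept hierarchy.
    query_confidence(concept) -> float in [0, 1], a proxy for how confidently
    the unnamed black-box API responds to samples drawn from `concept`.
    Returns a probability distribution over leaves.
    """
    rng = random.Random(seed)
    # Start from a uniform prior over leaf concepts.
    weights = {c: 1.0 / len(leaves) for c in leaves}
    for _ in range(rounds):
        # Sample a concept to probe, proportionally to its current likelihood.
        concept = rng.choices(leaves, weights=[weights[c] for c in leaves])[0]
        conf = query_confidence(concept)
        # Multiplicative update: confident responses raise the concept's
        # weight (factor > 1), weak responses shrink it (factor < 1).
        weights[concept] *= (0.5 + conf)
        total = sum(weights.values())
        weights = {c: w / total for c, w in weights.items()}
    return weights

# Stub standing in for the black-box target: here "dog" plays the role of a
# concept that is actually present in the unseen training data.
def fake_confidence(concept):
    return 0.9 if concept == "dog" else 0.2

probs = adapt_concept_likelihoods(["dog", "car", "flower"], fake_confidence)
```

After a modest number of probing rounds, the in-distribution concept accumulates most of the probability mass, which mirrors how ADI concentrates queries on the conceptual subset most consistent with the target model's behavior.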