Can't Hide Behind the API: Stealing Black-Box Commercial Embedding Models

πŸ“… 2024-06-13
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Commercial black-box text embedding models deployed via APIs are vulnerable to model extraction attacks through reverse engineering of API-call data.

Method: We propose a low-cost, high-fidelity model extraction approach that leverages returned text-vector pairs to train a compact student model via a novel multi-teacher knowledge distillation framework, jointly optimizing teacher ensemble integration and embedding dimensionality reduction (down to 256 dimensions), while incorporating a retrieval-performance alignment loss to preserve downstream effectiveness.

Contribution/Results: Our method achieves over 95% of the original commercial model's NDCG@10 on benchmarks including MSMARCO, at a total cost under $300. This work exposes a previously underappreciated intellectual property risk for API-deployed embedding models and establishes a reproducible technical baseline for both model protection and security evaluation.
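The multi-teacher distillation objective described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the per-teacher projection matrices, the cosine-distance loss, and uniform teacher averaging are all assumptions made for the sketch.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity along the last axis of two batches of vectors
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

def multi_teacher_loss(student_emb, teacher_embs, projections):
    """Average cosine distance between the student's 256-d embeddings and each
    teacher's embeddings after projecting them down to the student dimension.
    (Hypothetical sketch; the paper's actual loss and alignment terms may differ.)"""
    losses = []
    for t_emb, W in zip(teacher_embs, projections):
        target = t_emb @ W  # project teacher output (e.g. 1536-d) down to 256-d
        losses.append(1.0 - cosine(student_emb, target).mean())
    return float(np.mean(losses))

# Toy example: a batch of 4 texts, two teachers with different output dimensions.
rng = np.random.default_rng(0)
student = rng.normal(size=(4, 256))
teachers = [rng.normal(size=(4, 1536)), rng.normal(size=(4, 1024))]
projs = [rng.normal(size=(1536, 256)), rng.normal(size=(1024, 256))]
loss = multi_teacher_loss(student, teachers, projs)
```

In practice the projection matrices would be learned jointly with the student, so that embeddings harvested from differently sized commercial APIs can supervise a single compact model.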

πŸ“ Abstract
Embedding models that generate dense vector representations of text are widely used and hold significant commercial value. Companies such as OpenAI and Cohere offer proprietary embedding models via paid APIs, but despite being "hidden" behind APIs, these models are not protected from theft. We present, to our knowledge, the first effort to "steal" these models for retrieval by training thief models on text-embedding pairs obtained from the APIs. Our experiments demonstrate that it is possible to replicate the retrieval effectiveness of commercial embedding models at a cost of under $300. Notably, our methods allow for distilling from multiple teachers into a single robust student model, and for distilling into smaller models with lower-dimensional vectors yet competitive retrieval effectiveness. Our findings raise important considerations for deploying commercial embedding models and suggest measures to mitigate the risk of model theft.
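Retrieval effectiveness here is measured with NDCG@10, the metric reported in the summary above. As a reference, a minimal NDCG@k for a single query can be written as follows (this sketch uses linear gain; an exponential gain of 2^rel - 1 is also common, and the paper's evaluation toolkit may differ):

```python
import numpy as np

def ndcg_at_k(ranked_rels, k=10):
    """NDCG@k for one query.

    ranked_rels: relevance grades of the retrieved documents in rank order.
    Returns DCG@k divided by the ideal DCG@k (0.0 if there are no relevant docs).
    """
    rels = np.asarray(ranked_rels, dtype=float)[:k]
    # rank-position discounts: 1/log2(rank+1) for ranks 1..k
    discounts = 1.0 / np.log2(np.arange(2, rels.size + 2))
    dcg = (rels * discounts).sum()
    ideal = np.sort(np.asarray(ranked_rels, dtype=float))[::-1][:k]
    idcg = (ideal * discounts[:ideal.size]).sum()
    return dcg / idcg if idcg > 0 else 0.0
```

The "over 95% of the original model's NDCG@10" claim thus means the thief model's score is within 5% (relative) of the commercial teacher's score on the same benchmark queries.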
Problem

Research questions and friction points this paper is trying to address.

Stealing proprietary black-box embedding models via APIs
Replicating commercial model effectiveness with low cost
Distilling multiple models into smaller competitive versions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stealing black-box embedding models via API queries
Distilling multiple teachers into one student model
Creating smaller yet competitive embedding models