AI Summary
Commercial black-box text embedding models deployed via APIs are vulnerable to model extraction attacks through reverse engineering of API-call data. Method: We propose a low-cost, high-fidelity model extraction approach that leverages returned text-vector pairs to train a compact student model via a novel multi-teacher knowledge distillation framework, jointly optimizing teacher ensemble integration and embedding dimensionality reduction (down to 256 dimensions), while incorporating a retrieval-performance alignment loss to preserve downstream effectiveness. Contribution/Results: Our method achieves over 95% of the original commercial model's NDCG@10 on benchmarks including MSMARCO, at a total cost of under $300. This work exposes a previously underappreciated intellectual property risk for API-deployed embedding models and establishes a reproducible technical baseline for both model protection and security evaluation.
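To make the multi-teacher setup concrete, here is a minimal sketch of one plausible distillation objective: matching the student's pairwise cosine-similarity matrix against each teacher's, with a weighted average over teachers. This is an illustrative assumption, not the paper's exact loss; the function names, the similarity-matrix formulation, and the uniform default weights are all ours. A key property of similarity-matrix matching is that the student's embedding dimension need not equal any teacher's, which is what permits distilling down to 256 dimensions.

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two vectors given as plain lists."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def sim_matrix(embs):
    """Pairwise cosine-similarity matrix for a batch of embeddings."""
    return [[cosine_sim(u, v) for v in embs] for u in embs]

def multi_teacher_distill_loss(student_embs, teachers_embs, weights=None):
    """Weighted mean-squared error between the student's similarity
    matrix and each teacher's, averaged over the batch.

    student_embs:  list of student vectors for a batch of texts
    teachers_embs: one list of vectors per teacher, same batch order
    weights:       per-teacher weights (hypothetical; uniform default)
    """
    if weights is None:
        weights = [1.0 / len(teachers_embs)] * len(teachers_embs)
    s = sim_matrix(student_embs)
    n = len(student_embs)
    loss = 0.0
    for w, t_embs in zip(weights, teachers_embs):
        t = sim_matrix(t_embs)
        mse = sum((s[i][j] - t[i][j]) ** 2
                  for i in range(n) for j in range(n)) / (n * n)
        loss += w * mse
    return loss
```

Because only similarity structure is compared, a teacher returning 1536-dimensional vectors and a student producing 256-dimensional ones can be scored with the same function. A retrieval-performance alignment term, as described above, would be added on top of this loss.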
Abstract
Embedding models that generate dense vector representations of text are widely used and hold significant commercial value. Companies such as OpenAI and Cohere offer proprietary embedding models via paid APIs, but despite being "hidden" behind APIs, these models are not protected from theft. We present, to our knowledge, the first effort to "steal" these models for retrieval by training thief models on text-embedding pairs obtained from the APIs. Our experiments demonstrate that it is possible to replicate the retrieval effectiveness of commercial embedding models at a cost of under $300. Notably, our methods allow for distilling from multiple teachers into a single robust student model, and for distilling into presumably smaller models that produce lower-dimensional vectors while retaining competitive retrieval effectiveness. Our findings raise important considerations for deploying commercial embedding models and suggest measures to mitigate the risk of model theft.
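The dimensionality reduction mentioned in both the summary and the abstract can be pictured as a linear projection head on the student: teacher vectors of, say, 1536 dimensions are compared against student outputs passed through a learned map into 256 dimensions. The sketch below shows only the shape of that projection; the matrix here is randomly initialized (in the actual method it would be trained jointly with the distillation loss), and all names and dimensions are illustrative assumptions.

```python
import random

def make_projection(dim_in, dim_out, seed=0):
    """A dim_out x dim_in projection matrix, Gaussian-initialized.
    Hypothetical stand-in for a trained dimensionality-reduction head."""
    rng = random.Random(seed)
    scale = 1.0 / dim_in ** 0.5
    return [[rng.gauss(0.0, scale) for _ in range(dim_in)]
            for _ in range(dim_out)]

def project(vec, matrix):
    """Apply the projection: maps a dim_in vector to dim_out."""
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]
```

For example, projecting a 1536-dimensional embedding through `make_projection(1536, 256)` yields a 256-dimensional vector, matching the smallest student configuration described above.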