🤖 AI Summary
This work addresses the critical privacy risks inherent in using black-box large language models (LLMs) via public APIs, which require uploading plaintext inputs. To mitigate this, the authors propose AlienLM, a privacy-preserving framework that operates solely through standard API calls without requiring model access. AlienLM employs a vocabulary-level bijection to transform input text into an "alien language," which can be losslessly reconstructed on the client side. Coupled with Alien Adaptation Training (AAT), this approach enables black-box LLMs to directly comprehend the transformed inputs. The method achieves high-fidelity privacy protection, preserving over 81% of original model performance on average across four mainstream LLMs and seven benchmark tasks, while reducing text reconstruction success under strong adversarial attacks to below 0.22%, significantly outperforming random bijection and character-level baselines.
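To make the core mechanism concrete, the sketch below shows a token-level bijection with lossless client-side recovery. This is a minimal illustration only: it uses a random permutation of the vocabulary (which the paper treats merely as a weaker baseline), and the function names, the toy vocabulary, and the seed are all assumptions; AlienLM's actual mapping construction and the AAT fine-tuning step are not shown.

```python
import random

def build_bijection(vocab, seed):
    """Build a secret one-to-one mapping over the vocabulary.
    Illustrative random permutation; AlienLM's real mapping is not specified here."""
    shuffled = list(vocab)
    random.Random(seed).shuffle(shuffled)
    forward = dict(zip(vocab, shuffled))          # plaintext token -> alien token
    inverse = {v: k for k, v in forward.items()}  # alien token -> plaintext token
    return forward, inverse

def alienize(tokens, forward):
    """Client-side transform: map each token through the secret bijection."""
    return [forward[t] for t in tokens]

def dealienize(tokens, inverse):
    """Client-side recovery: invert the bijection, losslessly."""
    return [inverse[t] for t in tokens]

# Toy example (hypothetical vocabulary)
vocab = ["the", "patient", "has", "diabetes", "hypertension", "asthma"]
fwd, inv = build_bijection(vocab, seed=42)
msg = ["the", "patient", "has", "diabetes"]
alien = alienize(msg, fwd)           # what gets sent over the API
assert dealienize(alien, inv) == msg # round trip is exact
```

Because the mapping is a bijection, decoding is always exact; the privacy question the paper studies is how hard the mapping is to invert for an adversary, and how well an AAT-adapted model can still perform tasks on the alienized stream.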
📄 Abstract
Modern LLMs are increasingly accessed via black-box APIs, requiring users to transmit sensitive prompts, outputs, and fine-tuning data to external providers, creating a critical privacy risk at the API boundary. We introduce AlienLM, a deployable API-only privacy layer that protects text by translating it into an Alien Language via a vocabulary-scale bijection, enabling lossless recovery on the client side. Using only standard fine-tuning APIs, Alien Adaptation Training (AAT) adapts target models to operate directly on alienized inputs. Across four LLM backbones and seven benchmarks, AlienLM retains over 81% of plaintext-oracle performance on average, substantially outperforming random-bijection and character-level baselines. Under adversaries with access to model weights, corpus statistics, and learning-based inverse translation, recovery attacks reconstruct fewer than 0.22% of alienized tokens. Our results demonstrate a practical pathway for privacy-preserving LLM deployment under API-only access, substantially reducing plaintext exposure while maintaining task performance.