AlienLM: Alienization of Language for API-Boundary Privacy in Black-Box LLMs

πŸ“… 2026-01-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the privacy risks inherent in using black-box large language models (LLMs) via public APIs, which require uploading plaintext inputs. To mitigate this, the authors propose AlienLM, a privacy-preserving framework that operates solely through standard API calls without requiring model access. AlienLM employs a vocabulary-level bijection to transform input text into an "alien language," which can be losslessly reconstructed on the client side. Coupled with Alien Adaptation Training (AAT), this approach enables black-box LLMs to comprehend the transformed inputs directly. The method achieves strong privacy protection while retaining utility: it preserves over 81% of original model performance on average across four mainstream LLMs and seven benchmark tasks, and reduces text reconstruction success under strong adversarial attacks to below 0.22%, significantly outperforming random-bijection and character-level baselines.

πŸ“ Abstract
Modern LLMs are increasingly accessed via black-box APIs, requiring users to transmit sensitive prompts, outputs, and fine-tuning data to external providers, creating a critical privacy risk at the API boundary. We introduce AlienLM, a deployable API-only privacy layer that protects text by translating it into an Alien Language via a vocabulary-scale bijection, enabling lossless recovery on the client side. Using only standard fine-tuning APIs, Alien Adaptation Training (AAT) adapts target models to operate directly on alienized inputs. Across four LLM backbones and seven benchmarks, AlienLM retains over 81% of plaintext-oracle performance on average, substantially outperforming random-bijection and character-level baselines. Under adversaries with access to model weights, corpus statistics, and learning-based inverse translation, recovery attacks reconstruct fewer than 0.22% of alienized tokens. Our results demonstrate a practical pathway for privacy-preserving LLM deployment under API-only access, substantially reducing plaintext exposure while maintaining task performance.
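The core mechanism described above can be sketched in a few lines. This is an illustrative toy only, under loud assumptions: a hypothetical eight-word vocabulary and a word-level permutation derived from a client-held seed, whereas the paper's method operates over the model tokenizer's full vocabulary and pairs the bijection with Alien Adaptation Training so the model can work on the alienized text directly.

```python
import random

# Toy vocabulary (hypothetical; the real system uses the tokenizer's vocabulary).
VOCAB = ["the", "patient", "has", "diabetes", "and", "hypertension", "takes", "insulin"]

def make_bijection(vocab, seed):
    """Derive a secret permutation of the vocabulary from a client-held seed."""
    shuffled = vocab[:]
    random.Random(seed).shuffle(shuffled)
    encode = dict(zip(vocab, shuffled))          # plaintext token -> alien token
    decode = {v: k for k, v in encode.items()}   # alien token -> plaintext token
    return encode, decode

def alienize(text, encode):
    """Map each token through the bijection before sending it over the API."""
    return " ".join(encode[w] for w in text.split())

def dealienize(text, decode):
    """Invert the bijection on the client side; recovery is lossless."""
    return " ".join(decode[w] for w in text.split())

enc, dec = make_bijection(VOCAB, seed=42)
msg = "the patient takes insulin"
alien = alienize(msg, enc)                # what the provider sees
assert dealienize(alien, dec) == msg      # exact round-trip on the client
```

Because the mapping is a bijection kept on the client, the provider never receives the plaintext, yet every alienized output can be mapped back exactly; the adaptation step is what lets the model remain useful on such inputs.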
Problem

Research questions and friction points this paper is trying to address.

- API-boundary privacy
- black-box LLMs
- privacy risk
- sensitive data transmission
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Alienization
- API-boundary privacy
- vocabulary-scale bijection
- black-box LLMs
- privacy-preserving adaptation
πŸ”Ž Similar Papers
No similar papers found.