🤖 AI Summary
This work addresses the challenge of preserving privacy in in-context learning, where demonstration examples often contain sensitive information. Existing differentially private methods either incur substantial computational overhead or suffer from limited utility. To overcome these limitations, this study introduces the Product of Experts framework into private in-context learning for the first time, proposing a theoretically sound algorithm that enables parallelized inference while providing strong differential privacy guarantees. Experimental results across five diverse tasks—including text classification, mathematical reasoning, and vision-language understanding—demonstrate that the proposed method improves average accuracy by over 30 percentage points compared to current privacy-preserving approaches, achieving both high efficiency and robust privacy protection.
📝 Abstract
In-context learning (ICL) enables Large Language Models (LLMs) to adapt to new tasks with only a small set of examples at inference time, thereby avoiding task-specific fine-tuning. However, in-context examples may contain privacy-sensitive information that should not be revealed through model outputs. Existing differential privacy (DP) approaches to ICL are either computationally expensive or rely on heuristics with limited effectiveness, such as context oversampling, synthetic data generation, or unnecessary thresholding. We reformulate private ICL through the lens of a Product-of-Experts model, yielding a theoretically grounded framework whose inference step is trivially parallelizable. We evaluate our method on five datasets spanning text classification, mathematical reasoning, and vision-language tasks, and find that it improves accuracy by more than 30 percentage points on average over prior DP-ICL methods, while maintaining strong privacy guarantees.
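To make the Product-of-Experts idea concrete, here is a minimal sketch of how per-expert next-token distributions can be combined. The setup is an assumption for illustration (the abstract does not specify the aggregation details): each "expert" is imagined as the model conditioned on one disjoint subset of the private demonstrations, and the experts' predictions are multiplied in log space and renormalized. Because each expert's distribution can be computed independently, this aggregation parallelizes trivially; the DP noise mechanism of the actual algorithm is omitted here.

```python
import numpy as np

def product_of_experts(expert_probs, temperature=1.0):
    """Combine per-expert next-token distributions via a Product of Experts.

    expert_probs: array of shape (n_experts, vocab_size), where each row is
    a probability distribution over the vocabulary produced by one expert
    (hypothetically, the model conditioned on one disjoint subset of the
    private in-context examples). Each row can be computed in parallel.
    """
    # Work in log space: a product of probabilities becomes a sum of logs.
    log_probs = np.log(np.clip(expert_probs, 1e-12, None))
    combined = log_probs.sum(axis=0) / temperature
    combined -= combined.max()  # subtract the max for numerical stability
    probs = np.exp(combined)
    return probs / probs.sum()  # renormalize to a valid distribution
```

One property of this aggregation is that it sharpens agreement: when experts concur on a token, the product concentrates more mass on it than any single expert does, while tokens any expert assigns low probability are suppressed.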