Private PoEtry: Private In-Context Learning via Product of Experts

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of preserving privacy in in-context learning, where demonstration examples often contain sensitive information. Existing differentially private methods either incur substantial computational overhead or offer limited utility. To overcome these limitations, this study introduces the Product of Experts framework into private in-context learning for the first time, proposing a theoretically sound algorithm that enables parallelized inference while providing strong differential privacy guarantees. Experimental results across five diverse tasks, spanning text classification, mathematical reasoning, and vision-language understanding, show that the proposed method improves average accuracy by more than 30 percentage points over prior privacy-preserving approaches, achieving both high efficiency and robust privacy protection.

📝 Abstract
In-context learning (ICL) enables Large Language Models (LLMs) to adapt to new tasks with only a small set of examples at inference time, thereby avoiding task-specific fine-tuning. However, in-context examples may contain privacy-sensitive information that should not be revealed through model outputs. Existing differential privacy (DP) approaches to ICL are either computationally expensive or rely on heuristics with limited effectiveness, including context oversampling, synthetic data generation, or unnecessary thresholding. We reformulate private ICL through the lens of a Product-of-Experts model. This gives a theoretically grounded framework, and the algorithm can be trivially parallelized. We evaluate our method across five datasets in text classification, math, and vision-language. We find that our method improves accuracy by more than 30 percentage points on average compared to prior DP-ICL methods, while maintaining strong privacy guarantees.
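The Product-of-Experts view described in the abstract can be sketched as follows: each in-context example acts as an "expert" that induces a distribution over the model's next output, and the PoE prediction multiplies the experts' probabilities, which is equivalent to summing their log-probabilities. A minimal, hypothetical illustration is below, using per-expert clipping plus Gaussian noise (the standard Gaussian mechanism) to suggest how a DP guarantee could attach to the aggregation; the function name, parameters, and noise calibration are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def poe_aggregate(expert_logprobs, clip=1.0, noise_std=0.5, rng=None):
    """Hypothetical Product-of-Experts aggregation with DP-style noise.

    expert_logprobs: (n_experts, vocab) array of log-probabilities,
        one row per in-context demonstration acting as an expert.
    clip: bound on each expert's per-coordinate contribution, so that
        adding or removing one demonstration changes the sum by at
        most `clip` in each coordinate (the sensitivity).
    noise_std: Gaussian noise multiplier applied to that sensitivity.
    """
    rng = np.random.default_rng(rng)
    logs = np.asarray(expert_logprobs, dtype=float)
    # Bound each expert's influence (log-probs are <= 0 already).
    logs = np.clip(logs, -clip, 0.0)
    # Product of experts = sum of log-probabilities; each expert's row
    # is independent, so this step parallelizes trivially.
    total = logs.sum(axis=0)
    # Gaussian mechanism: noise scaled to the clipped sensitivity.
    total += rng.normal(0.0, noise_std * clip, size=total.shape)
    # Renormalize to a probability distribution (stable softmax).
    total -= total.max()
    probs = np.exp(total)
    return probs / probs.sum()
```

With `noise_std=0` this reduces to plain PoE aggregation of the clipped experts; in a real DP analysis, `noise_std` would be calibrated to the target (epsilon, delta) budget.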
Problem

Research questions and friction points this paper is trying to address.

In-Context Learning
Privacy
Differential Privacy
Large Language Models
Private Inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Product of Experts
Private In-Context Learning
Differential Privacy
Large Language Models
Parallelizable Algorithm