Federated In-Context Learning: Iterative Refinement for Improved Answer Quality

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
In privacy-sensitive question-answering (QA) tasks, in-context learning (ICL) suffers from a scarcity of high-quality demonstration examples, driven by data privacy constraints, prohibitive annotation costs, and distributional shifts; existing federated learning approaches, meanwhile, incur excessive communication overhead or underutilize local client data. The paper proposes Federated In-Context Learning (Fed-ICL), a parameter-free distributed ICL framework in which clients and a central server jointly refine prompts and responses through iterative collaboration, without ever transmitting model parameters. Fed-ICL comes with theoretical convergence guarantees and, on standard QA benchmarks, achieves answer quality on par with centralized ICL while substantially reducing total communication cost, improving both privacy preservation and system efficiency for resource-constrained decentralized settings.

📝 Abstract
For question-answering (QA) tasks, in-context learning (ICL) enables language models to generate responses without modifying their parameters by leveraging examples provided in the input. However, the effectiveness of ICL heavily depends on the availability of high-quality examples, which are often scarce due to data privacy constraints, annotation costs, and distribution disparities. A natural solution is to utilize examples stored on client devices, but existing approaches either require transmitting model parameters, incurring significant communication overhead, or fail to fully exploit local datasets, limiting their effectiveness. To address these challenges, we propose Federated In-Context Learning (Fed-ICL), a general framework that enhances ICL through an iterative, collaborative process. Fed-ICL progressively refines responses by leveraging multi-round interactions between clients and a central server, improving answer quality without the need to transmit model parameters. We establish theoretical guarantees for the convergence of Fed-ICL and conduct extensive experiments on standard QA benchmarks, demonstrating that our proposed approach achieves strong performance while maintaining low communication costs.
Problem

Research questions and friction points this paper is trying to address.

Degraded answer quality in federated question-answering due to scarce high-quality ICL demonstrations
High communication overhead in existing federated approaches that transmit model parameters
Data scarcity and privacy constraints that limit access to quality examples in decentralized settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning without parameter transmission
Iterative refinement via client-server interactions
Improved answer quality by fully exploiting examples stored on client devices
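The iterative client-server refinement described above can be sketched as a simple loop: each round, every client builds an ICL prompt from its local demonstrations plus the server's current draft answer, and the server aggregates the clients' candidate answers into a new draft. This is a minimal illustration, not the paper's exact algorithm: the local model call is mocked, and the aggregation rule here (majority vote) is an assumption standing in for Fed-ICL's actual refinement step.

```python
from collections import Counter

def client_generate(local_examples, question, server_draft):
    """One client's ICL step: assemble a prompt from local demonstrations
    and the server's current draft, then query a local model.
    The model call is mocked here; a real client would invoke its local LLM."""
    prompt = "\n".join(f"Q: {q}\nA: {a}" for q, a in local_examples)
    prompt += f"\nDraft: {server_draft}\nQ: {question}\nA:"
    # Mocked output: return the answer of the best-matching local example.
    return max(local_examples, key=lambda qa: qa[0] == question)[1]

def server_aggregate(candidates):
    """Stand-in aggregation rule (assumption): majority vote over the
    clients' candidate answers. Only text crosses the network, never weights."""
    return Counter(candidates).most_common(1)[0][0]

def fed_icl(clients, question, rounds=3):
    """Multi-round Fed-ICL-style loop: broadcast draft, collect candidates,
    aggregate, repeat. `clients` is a list of per-client example sets."""
    draft = ""
    for _ in range(rounds):
        candidates = [client_generate(ex, question, draft) for ex in clients]
        draft = server_aggregate(candidates)
    return draft
```

Because only prompts and candidate answers are exchanged, per-round communication scales with answer length rather than model size, which is the source of the framework's low communication cost.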