Disentangling Latent Shifts of In-Context Learning Through Self-Training

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the instability and poor generalization of in-context learning (ICL) in large language models (LLMs), problems that worsen as the number of demonstrations, and hence the prompt length, grows. The authors propose Self-Training ICL (STICL), which employs a teacher–student framework: a teacher model generates pseudo-labels, and a student model is trained on them via a lightweight adapter, disentangling the latent shift induced by the in-context demonstrations from the latent shift of the query without fine-tuning the backbone LLM. The student exhibits weak-to-strong generalization, progressively refining its predictions over time. Experiments show that STICL consistently outperforms standard ICL and other disentangling strategies on both in-domain and out-of-domain benchmarks, improving stability and generalization while avoiding the inference cost of long demonstration-laden prompts.

📝 Abstract
In-context learning (ICL) has become essential in natural language processing, particularly with autoregressive large language models capable of learning from demonstrations provided within the prompt. However, ICL faces challenges with stability and long contexts, especially as the number of demonstrations grows, leading to poor generalization and inefficient inference. To address these issues, we introduce STICL (Self-Training ICL), an approach that disentangles the latent shifts of demonstrations from the latent shift of the query through self-training. STICL employs a teacher model to generate pseudo-labels and trains a student model using these labels, encoded in an adapter module. The student model exhibits weak-to-strong generalization, progressively refining its predictions over time. Our empirical results show that STICL improves generalization and stability, consistently outperforming traditional ICL methods and other disentangling strategies across both in-domain and out-of-domain data.
Problem

Research questions and friction points this paper is trying to address.

Addressing instability in in-context learning with weak supervision
Disentangling demonstration-induced shifts from query processing
Improving generalization and inference efficiency on in-domain and out-of-domain tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weak supervision disentangles latent shifts
Lightweight adapter captures demonstration effects
Pseudo-label correction enables strong generalization
Josip Jukic
TakeLab, Faculty of Electrical Engineering and Computing, University of Zagreb, Croatia
Jan Snajder
Professor of Computer Science, TakeLab, Faculty of Electrical Engineering and Computing (FER), University of Zagreb, Croatia
Natural Language Processing · Semantics · Information Extraction