The Right Time Matters: Data Arrangement Affects Zero-Shot Generalization in Instruction Tuning

📅 2024-06-17
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study investigates the mechanisms underlying zero-shot generalization in instruction tuning, showing that generalization emerges very early in training, with loss serving as a stable indicator, and that it is highly sensitive to the ordering of training data. Rather than treating tasks as the unit of analysis, the authors examine the data itself and find that instance-level similarity between training and test inputs, not task-level structural alignment, is the primary driver of zero-shot generalization. Building on this, they propose the Test-centric Multi-turn Arrangement framework, which orders training examples by their similarity to test instances across successive rounds. The method improves zero-shot generalization on unseen tasks, promotes continual learning, and yields further loss reduction, establishing data ordering as a controllable lever for generalization that requires no changes to model architecture or training objectives.

📝 Abstract
Understanding alignment techniques begins with comprehending zero-shot generalization brought by instruction tuning, but little of the mechanism has been understood. Existing work has largely been confined to the task level, without considering that tasks are artificially defined and, to LLMs, merely consist of tokens and representations. To bridge this gap, we investigate zero-shot generalization from the perspective of the data itself. We first demonstrate that zero-shot generalization happens very early during instruction tuning, with loss serving as a stable indicator. Next, we investigate training data arrangement through similarity and granularity perspectives, confirming that the timing of exposure to certain training examples may greatly facilitate generalization on unseen tasks. Finally, we propose a more grounded training data arrangement framework, Test-centric Multi-turn Arrangement, and show its effectiveness in promoting continual learning and further loss reduction. For the first time, we show that zero-shot generalization during instruction tuning is a form of similarity-based generalization between training and test data at the instance level. Our code is released at https://github.com/thunlp/Dynamics-of-Zero-Shot-Generalization.
Problem

Research questions and friction points this paper is trying to address.

Investigates how training data arrangement affects zero-shot generalization in instruction tuning
Examines instance-level similarity between training and test data as the basis of generalization
Proposes a test-centric data arrangement framework that promotes continual learning and further loss reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Loss serves as a stable, early indicator of zero-shot generalization
Arranging training data by similarity and granularity improves generalization on unseen tasks
Test-centric Multi-turn Arrangement promotes continual learning and further loss reduction