Evolution without Large Models: Training Language Model with Task Principles

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) used for data augmentation incur substantial carbon emissions and pose privacy risks due to potential training-data leakage from closed-source models. To address these challenges, this paper proposes a decoupled lightweight training paradigm—“principle distillation → instance generation”—where LLMs are employed solely to distill high-level, task-specific principles (e.g., reasoning logic, structural constraints), while small language models (SLMs) autonomously generate high-quality training instances guided by those principles. This design eliminates end-to-end LLM involvement in data generation, thereby significantly reducing computational overhead and mitigating data privacy concerns. Experiments demonstrate that, under equivalent data scale, our method improves SLM accuracy by an average of 4.2% across multiple tasks, while cutting training-related carbon emissions by 67%, outperforming both end-to-end LLM-based augmentation and purely SLM-based baselines.

📝 Abstract
A common training approach for language models involves using a large-scale language model to expand a human-provided dataset, which is subsequently used for model training. This method significantly reduces training costs by eliminating the need for extensive human data annotation. However, it still faces challenges such as high carbon emissions during data augmentation and the risk of data leakage when using closed-source LLMs. To address these issues, we propose a self-evolution method for language models. First, we introduce Multi-level Principle Generation, which enables a large-scale model to summarize task-completion principles from a small amount of task data. Then, we propose Principle-based Instance Generation, in which a smaller-scale language model uses these task principles to generate a large amount of data, which is then used for model training. Experimental results show that our proposed method significantly improves model performance compared to directly using a smaller-scale language model to generate data. Additionally, since the large-scale language model is used only to generate the task-completion principles, the carbon emissions associated with training the model are greatly reduced.
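The two stages described in the abstract can be sketched as a minimal pipeline. This is an illustrative assumption of how the flow might look, not the paper's actual implementation: the function names, prompt formats, and mock models below are all hypothetical.

```python
# Sketch of the "principle distillation -> instance generation" paradigm.
# The LLM is called once to distill principles; all instance generation
# is then done by the small model (SLM) alone.

def distill_principles(llm, seed_examples):
    """Stage 1 (Multi-level Principle Generation, sketched): the large
    model summarizes task-completion principles from a few examples."""
    prompt = ("Summarize general principles for solving tasks like these:\n"
              + "\n".join(seed_examples))
    return llm(prompt)  # a short text listing high-level principles

def generate_instances(slm, principles, n):
    """Stage 2 (Principle-based Instance Generation, sketched): the small
    model generates n training instances guided only by the principles,
    with no further LLM involvement."""
    instances = []
    for i in range(n):
        prompt = f"Principles:\n{principles}\nGenerate training example #{i + 1}:"
        instances.append(slm(prompt))
    return instances

# Toy stand-ins so the sketch runs end to end (hypothetical outputs).
mock_llm = lambda p: "1. Show the reasoning steps. 2. Keep answers concise."
mock_slm = lambda p: "Q: What is 2+2? Reasoning: add the operands. A: 4"

principles = distill_principles(mock_llm, ["Q: What is 1+1? A: 2"])
training_data = generate_instances(mock_slm, principles, 3)
# training_data would then be used to fine-tune the small model.
```

The key design point is that the expensive model appears only in `distill_principles`, so the cost and privacy exposure of LLM calls do not scale with the size of the generated dataset.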
Problem

Research questions and friction points this paper is trying to address.

Reduce carbon emissions in language model training
Avoid data leakage from closed-source LLMs
Improve performance of small-scale language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level Principle Generation for task principles
Principle-based Instance Generation for data creation
Reduced carbon emissions with smaller-scale models