🤖 AI Summary
To address the optimization difficulties and degraded generalization caused by gradient conflicts in multi-task fine-tuning of large language models, this paper proposes a gradient-driven expert adapter framework. First, instruction-based training data are grouped by clustering in gradient space according to the directional similarity of their task-specific gradients. Second, a lightweight LoRA-based expert adapter is trained independently on each cluster. Finally, at inference time, each input dynamically activates a weighted ensemble of the most relevant experts, with weights determined by the input's gradient similarity to each cluster's representative gradient direction. The main novelties are the gradient-direction clustering mechanism and the gradient-similarity-aware adaptive ensemble strategy. The approach retains high training and parameter efficiency (incurring only LoRA-level overhead) while significantly outperforming LoRA baselines trained on the full dataset and existing ensemble methods across diverse downstream tasks, effectively balancing task specialization and cross-task generalization.
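The directional-clustering step can be sketched as spherical k-means over normalized per-instruction gradient vectors, so that only gradient *direction* (not magnitude) drives grouping. This is an illustrative sketch, not the paper's exact algorithm: the function name `cluster_gradient_directions`, the greedy farthest-point initialization, and the fixed iteration count are all assumptions made for this example.

```python
import numpy as np

def cluster_gradient_directions(grads, k, iters=20):
    """Group gradient vectors by direction via spherical k-means (sketch).

    grads: (n, d) array of per-instruction gradient features.
    Returns (labels, centroids), where centroids are unit vectors
    representing each cluster's dominant gradient direction.
    """
    # Normalize rows so cosine similarity reduces to a dot product.
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)

    # Deterministic farthest-point initialization (an assumption here):
    # start from the first vector, then greedily add the vector least
    # similar to all chosen centroids.
    centroids = [g[0]]
    for _ in range(1, k):
        sims = np.max(np.stack([g @ c for c in centroids]), axis=0)
        centroids.append(g[sims.argmin()])
    centroids = np.array(centroids)

    for _ in range(iters):
        # Assign each vector to its most cosine-similar centroid.
        labels = (g @ centroids.T).argmax(axis=1)
        # Re-estimate each centroid as the normalized mean direction.
        for c in range(k):
            members = g[labels == c]
            if len(members):
                m = members.sum(axis=0)
                centroids[c] = m / np.linalg.norm(m)
    return labels, centroids
```

In practice the per-example gradient features of an LLM would be far too large to cluster directly, so a dimensionality-reduced gradient representation would typically be clustered instead; the sketch above only shows the direction-based grouping itself.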
📝 Abstract
The training and fine-tuning of large language models (LLMs) often involve diverse textual data from multiple sources, which poses challenges due to conflicting gradient directions, hindering optimization and specialization. These challenges can undermine model generalization across tasks, resulting in reduced downstream performance. Recent research suggests that fine-tuning LLMs on carefully selected, task-specific subsets of data can match or even surpass the performance of using the entire dataset. Building on these insights, we propose the Ensembles of Low-Rank Expert Adapters (ELREA) framework to improve the model's capability to handle diverse tasks. ELREA clusters the training instructions based on their gradient directions, representing different areas of expertise and thereby reducing conflicts during optimization. Expert adapters are then trained on these clusters, utilizing the low-rank adaptation (LoRA) technique to ensure training efficiency and model scalability. During inference, ELREA combines predictions from the most relevant expert adapters based on the input data's gradient similarity to the training clusters, ensuring optimal adapter selection for each task. Experiments show that our method outperforms baseline LoRA adapters trained on the full dataset and other ensemble approaches with similar training and inference complexity across a range of domain-specific tasks.
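The inference-time combination described above can be illustrated as follows: score each expert by the cosine similarity between the input's gradient feature and that cluster's representative direction, keep the top-k experts, and blend their outputs with softmax weights. This is a minimal sketch under assumed details; the helper names (`expert_weights`, `ensemble_logits`), the top-k cutoff, and the softmax weighting are illustrative choices, not necessarily ELREA's exact scheme.

```python
import numpy as np

def expert_weights(input_grad, centroids, top_k=2):
    """Weight experts by the input's gradient similarity to each
    cluster's representative (unit-norm) gradient direction (sketch)."""
    g = input_grad / np.linalg.norm(input_grad)
    sims = centroids @ g                 # cosine similarity per expert
    top = np.argsort(sims)[-top_k:]      # keep only the most relevant experts
    w = np.zeros_like(sims)
    e = np.exp(sims[top] - sims[top].max())  # stable softmax over selected experts
    w[top] = e / e.sum()
    return w

def ensemble_logits(expert_logits, weights):
    """Combine per-expert output logits with the computed weights.

    expert_logits: (num_experts, num_classes); weights: (num_experts,).
    """
    return np.tensordot(weights, expert_logits, axes=1)
```

Experts outside the top-k receive exactly zero weight, so only the selected adapters need to be evaluated at inference time, keeping the per-query cost close to a single LoRA forward pass times k.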