Diversity as a Reward: Fine-Tuning LLMs on a Mixture of Domain-Undetermined Data

📅 2025-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from limited cross-domain generalization when domain labels are missing, imprecise, or non-standardized. Method: We propose a fine-tuning paradigm that explicitly treats data diversity as a learnable reward signal. The approach constructs contrastive data pools, jointly models cross-domain (inter-) and in-domain (intra-) diversity, designs a reinforcement-style diversity-driven sampling strategy, and introduces a dual-role LLM collaborative training framework, in which one role of the LLM selects data at the output end while the other role is tuned at the input end, enabling dynamic co-evolution of data selection and model updating. Contribution/Results: This paradigm overcomes the limitations of conventional static filtering and heuristic proportion estimation, improving generalization on domain-undetermined data and boosting performance on multiple downstream tasks across several state-of-the-art LLMs. Experiments validate both effectiveness and scalability, and the code is publicly released.
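As a rough illustration of "diversity as a reward" for data selection (not the paper's actual algorithm; the embedding-space reward and greedy selection here are assumptions), one can reward each candidate by its minimum cosine distance to already-selected items and pick greedily:

```python
import numpy as np

def diversity_reward(selected: np.ndarray, candidate: np.ndarray) -> float:
    """Illustrative reward: 1 minus the candidate's highest cosine
    similarity to any already-selected embedding (higher = more diverse)."""
    if len(selected) == 0:
        return 1.0  # first pick is maximally novel by convention
    sims = selected @ candidate / (
        np.linalg.norm(selected, axis=1) * np.linalg.norm(candidate) + 1e-9
    )
    return float(1.0 - sims.max())

def select_diverse(pool: np.ndarray, k: int) -> list[int]:
    """Greedily choose k items from the pool, maximizing the reward at each step."""
    chosen: list[int] = []
    for _ in range(k):
        rewards = [
            diversity_reward(pool[chosen], pool[i]) if i not in chosen else -np.inf
            for i in range(len(pool))
        ]
        chosen.append(int(np.argmax(rewards)))
    return chosen
```

With a pool of near-duplicate and distinct embeddings, the selector skips near-duplicates and picks the most dissimilar points first; the paper's reward additionally balances inter- and intra-domain diversity rather than raw embedding dispersion.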

📝 Abstract
Fine-tuning large language models (LLMs) using diverse datasets is crucial for enhancing their overall performance across various domains. In practical scenarios, existing methods based on modeling the mixture proportions of data composition often struggle with data whose domain labels are missing, imprecise or non-normalized, while methods based on data selection usually encounter difficulties in balancing multi-domain performance. To address these challenges, in this paper, we study the role of data diversity in enhancing the overall abilities of LLMs by empirically constructing contrastive data pools and theoretically deriving explanations for both inter- and intra-diversity. Building upon the insights gained, we propose a new method that gives the LLM a dual identity: an output model to cognitively probe and select data based on diversity reward, as well as an input model to be tuned with the selected data. Extensive experiments show that the proposed method notably boosts performance across domain-undetermined data and a series of foundational downstream tasks when applied to various advanced LLMs. We release our code and hope this study can shed light on the understanding of data diversity and advance feedback-driven data-model co-development for LLMs.
Problem

Research questions and friction points this paper is trying to address.

Enhancing overall LLM performance across domains
Handling data whose domain labels are missing, imprecise, or non-normalized
Balancing performance across multiple domains during data selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive data pools that empirically probe how diversity enhances LLM abilities
Dual-identity LLM: an output model that selects data by diversity reward, and an input model tuned on the selection
Diversity-as-reward selection that boosts multi-domain performance
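The dual-identity idea above can be caricatured as a short co-evolution loop, where the same model object both scores candidates (output role) and is updated on its selections (input role). This is a toy numpy sketch; the scoring rule, the gradient step, and all sizes are stand-ins, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an LLM: a weight vector that both scores data and "learns".
weights = rng.normal(size=4)
w0 = weights.copy()
pool = rng.normal(size=(20, 4))  # hypothetical candidate-data embeddings

for step in range(3):
    # Output role: score candidates; prefer points least aligned with the
    # current weights (a crude diversity proxy, not the paper's reward).
    scores = -np.abs(pool @ weights)
    batch = pool[np.argsort(scores)[-5:]]
    # Input role: update the model on the selected batch (toy update step).
    weights += 0.1 * batch.mean(axis=0)
```

Because the selection criterion depends on the weights and the weights depend on the selection, data choice and model state co-evolve across steps, which is the feedback loop the summary describes.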