Reliability-Aware Determinantal Point Processes for Robust Informative Data Selection in Large Language Models

📅 2026-01-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of maintaining stability in large language model fine-tuning under unreliable conditions—such as storage failures and communication errors—where conventional data selection methods often falter. The authors propose ProbDPP, the first framework to integrate data access reliability into Determinantal Point Processes (DPPs). The framework introduces a regularized objective that jointly optimizes geometric diversity and a reliability-aware cost, formulates data selection as a combinatorial semi-bandit problem, and devises a UCB-style online learning algorithm. ProbDPP thereby enables robust selection of diverse data subsets even when reliability information is unknown in advance. Theoretical analysis establishes a bounded regret guarantee, and the authors argue this improves both the robustness and the deployment efficiency of data selection for fine-tuning.
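The regularized objective described above can be illustrated with a small sketch. The exact formulation in the paper is not reproduced here; this assumes a common surrogate in which geometric diversity is the log-determinant of the kernel submatrix and the reliability-aware cost is a weighted sum of per-item unreliabilities, maximized greedily (as is standard for the NP-hard k-DPP MAP problem). All function and parameter names (`greedy_select`, `lam`) are illustrative, not the authors'.

```python
import numpy as np

def regularized_score(L, subset, unreliability, lam=1.0):
    """Hypothetical ProbDPP-style objective:
    log det(L_S)  (geometric diversity of subset S)
    minus lam * sum of unreliability over S  (reliability-aware cost)."""
    idx = np.array(subset)
    sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
    diversity = logdet if sign > 0 else -np.inf
    return diversity - lam * unreliability[idx].sum()

def greedy_select(L, unreliability, k, lam=1.0):
    """Greedily grow a size-k subset, at each step adding the item
    that most increases the regularized score."""
    n = L.shape[0]
    chosen = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            val = regularized_score(L, chosen + [i], unreliability, lam)
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen
```

Increasing `lam` shifts the selection away from diverse-but-unreliable items toward reliable ones, which is the trade-off the summary describes.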

📝 Abstract
Informative data selection is a key requirement for large language models (LLMs): it minimizes the amount of data required for fine-tuning, network distillation, and token pruning, enabling fast and efficient deployment, especially under computational and communication constraints. Traditional subset selection methods, including those based on Determinantal Point Processes (DPPs), focus on maximizing diversity but assume that selected data batches are always available error-free. This assumption breaks down under partial storage outages, imperfect communication, and stochastic access failures; moreover, we show that the original formulation collapses under such conditions. To address this gap, we introduce ProbDPP, a novel reliability-aware formulation of the k-DPP that accounts for probabilistic data access by recasting the objective with a regularization term, yielding a well-posed objective that decomposes into a geometric diversity term and an unreliability cost. The resulting objective enables robust selection of diverse data batches under uncertainty. We further frame this reliability-aware diversity maximization as a combinatorial semi-bandit problem and propose a UCB-style algorithm to efficiently learn the unknown reliabilities online. Theoretical analysis provides regret bounds for the proposed approach, ensuring performance guarantees.
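The combinatorial semi-bandit framing in the abstract can be sketched as an online loop: each round, form optimistic (UCB) estimates of the unknown per-item access reliabilities, select a subset under the optimistic unreliability cost, and observe Bernoulli access outcomes only for the chosen items (semi-bandit feedback). This is a minimal illustration, not the paper's algorithm; the greedy inner step, the bonus constant, and all names (`ucb_semi_bandit`, `true_p`) are assumptions.

```python
import numpy as np

def ucb_semi_bandit(L, k, rounds, true_p, lam=1.0, seed=0):
    """Sketch of a UCB-style loop for learning unknown access
    reliabilities while selecting diverse subsets of size k."""
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    counts = np.zeros(n)   # times each item was selected
    means = np.zeros(n)    # empirical access-success rates
    for t in range(1, rounds + 1):
        # optimistic reliability: empirical mean plus exploration bonus;
        # never-tried items get the maximally optimistic estimate 1.0
        bonus = np.sqrt(1.5 * np.log(t + 1) / np.maximum(counts, 1.0))
        p_ucb = np.clip(np.where(counts > 0, means + bonus, 1.0), 0.0, 1.0)
        # greedy subset under log-det diversity minus optimistic unreliability
        unrel = 1.0 - p_ucb
        chosen = []
        for _ in range(k):
            best, best_val = None, -np.inf
            for i in range(n):
                if i in chosen:
                    continue
                idx = np.array(chosen + [i])
                sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
                val = (logdet if sign > 0 else -np.inf) - lam * unrel[idx].sum()
                if val > best_val:
                    best, best_val = i, val
            chosen.append(best)
        # semi-bandit feedback: observe success/failure for chosen items only
        outcomes = rng.random(k) < true_p[np.array(chosen)]
        for i, ok in zip(chosen, outcomes):
            counts[i] += 1
            means[i] += (ok - means[i]) / counts[i]
    return means, counts
```

The regret bound mentioned in the abstract would, under this framing, control the cumulative gap between subsets chosen with the optimistic estimates and those chosen with the true reliabilities.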
Problem

Research questions and friction points this paper is trying to address.

Informative Data Selection
Determinantal Point Processes
Reliability
Large Language Models
Data Uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reliability-aware DPP
ProbDPP
Informative Data Selection
Combinatorial semi-bandit
Robust subset selection