Enhancing Diversity in Large Language Models via Determinantal Point Processes

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) often suffer from output homogenization and reduced semantic diversity after supervised fine-tuning (SFT) or reinforcement learning (RL), with existing approaches largely confined to inference-time interventions or lexical-level diversification. To address this, we propose Determinantal Quality Optimization (DQO), a training-stage framework that jointly optimizes generation quality and semantic diversity. DQO is the first method to integrate determinantal point processes (DPPs) into LLM fine-tuning: it constructs a kernel similarity matrix over response embeddings for a given prompt and uses its determinant to quantify the geometric volume spanned by multiple responses in embedding space—thereby explicitly modeling semantic dissimilarity. Compatible with both SFT and RL paradigms, DQO significantly enhances semantic diversity across instruction following, summarization, story generation, and reasoning tasks, while preserving—and often surpassing—the baseline models’ generation quality, effectively alleviating the diversity–quality trade-off.
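The core diversity signal described above — the determinant of a kernel similarity matrix over response embeddings, interpreted as the volume those embeddings span — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the kernel choice (cosine similarity here), the normalization, and the `dpp_diversity` function name are all assumptions for demonstration.

```python
import numpy as np

def dpp_diversity(embeddings, eps=1e-6):
    """Log-volume spanned by a group of response embeddings,
    measured as log det of a kernel similarity matrix.
    Illustrative sketch only; DQO's actual kernel may differ."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    K = X @ X.T                                       # cosine-similarity kernel
    K = K + eps * np.eye(len(X))                      # jitter for numerical stability
    sign, logdet = np.linalg.slogdet(K)               # stable log-determinant
    return logdet  # larger value => responses span more volume => more diverse

# Two near-duplicate "responses" span almost no volume (log-det very negative);
# two orthogonal ones span maximal volume (log-det near 0).
near_dup = [[1.0, 0.0, 0.0], [0.99, 0.01, 0.0]]
spread   = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
```

Because the determinant of a Gram matrix equals the squared volume of the parallelepiped spanned by the vectors, semantically redundant responses collapse this volume toward zero, which is exactly the redundancy penalty a DPP encodes.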

📝 Abstract
Supervised fine-tuning and reinforcement learning are two popular methods for post-training large language models (LLMs). While these methods improve performance on downstream tasks, they often reduce the model's output diversity, leading to narrow, canonical responses. Existing methods to enhance diversity are limited: they either operate at inference time or focus on lexical differences. We propose a novel training method named DQO, based on determinantal point processes (DPPs), to jointly optimize LLMs for quality and semantic diversity. Our approach samples and embeds a group of responses for each prompt, then uses the determinant of a kernel-based similarity matrix to measure diversity as the volume spanned by the embeddings of these responses. Experiments across instruction-following, summarization, story generation, and reasoning tasks demonstrate that our method substantially improves semantic diversity without sacrificing model quality.
Problem

Research questions and friction points this paper is trying to address.

Enhancing output diversity in large language models
Overcoming narrow canonical responses from fine-tuning
Balancing quality and semantic diversity in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

DPP-based training for quality and diversity
Kernel similarity matrix measures semantic diversity
Improves diversity without sacrificing model quality