Addressing LLM Diversity by Infusing Random Concepts

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited output diversity commonly observed in large language models (LLMs) during generative tasks. To mitigate this issue, the authors propose a prompt engineering approach that injects task-irrelevant random words or sentences into the input prompt to stimulate more diverse model responses. A systematic evaluation protocol encompassing multiple diversity metrics is developed and applied across several mainstream LLMs. Experimental results demonstrate that the proposed method significantly enhances output diversity, particularly in enumeration-style tasks, thereby confirming its effectiveness and generalizability. This approach offers a simple yet efficient strategy for improving the diversity of LLM-generated content without requiring model retraining or architectural modifications.

📝 Abstract
Large language models (LLMs) are known to produce outputs with limited diversity. In this work, we study whether infusing random concepts into the prompts can improve the diversity of the generated outputs. To benchmark the approach, we design a systematic evaluation protocol which involves prompting an LLM with questions of the form "Name 10 Hollywood actors", and analyzing diversity measures of the resulting LLM outputs. Our experiments on multiple LLMs show that prepending random words/sentences unrelated to the prompt results in greater diversity in the outputs of LLMs. We believe that this promising result and the evaluation protocol open up interesting avenues for future work, such as how infusing randomness into LLMs could be applied to other domains. Further, the evaluation protocol could also inspire research into benchmarking LLM diversity more systematically.
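The idea described in the abstract can be sketched in a few lines: prepend task-irrelevant random words to the prompt, then score diversity across repeated runs. The helper names (`infuse_random_concepts`, `unique_item_ratio`), the vocabulary, and the simulated model outputs below are illustrative assumptions, not the paper's actual implementation or metrics.

```python
import random

def infuse_random_concepts(prompt, vocabulary, n_words=3, seed=None):
    """Prepend task-irrelevant random words to a prompt (hypothetical helper)."""
    rng = random.Random(seed)
    words = rng.sample(vocabulary, n_words)
    return f"Random concepts: {', '.join(words)}.\n{prompt}"

def unique_item_ratio(runs):
    """Simple diversity proxy: distinct answers across runs / total answers."""
    all_items = [item.lower() for run in runs for item in run]
    return len(set(all_items)) / len(all_items)

# Illustrative vocabulary and hand-made outputs (not data from the paper).
VOCAB = ["glacier", "teapot", "quasar", "violin", "cactus", "lantern"]

prompt = infuse_random_concepts("Name 10 Hollywood actors.", VOCAB, seed=0)

baseline_runs = [["Tom Hanks", "Brad Pitt"], ["Tom Hanks", "Brad Pitt"]]
infused_runs = [["Tom Hanks", "Meryl Streep"], ["Denzel Washington", "Brad Pitt"]]

print(prompt)
print(unique_item_ratio(baseline_runs))  # 0.5 (same answers every run)
print(unique_item_ratio(infused_runs))   # 1.0 (all answers distinct)
```

In practice each run would call an actual LLM with the infused prompt; here the outputs are hard-coded only to show how a set-based diversity measure would compare the baseline against the infused condition.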
Problem

Research questions and friction points this paper is trying to address.

LLM diversity
output diversity
large language models
randomness infusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM diversity
random concept infusion
prompt engineering
diversity evaluation
large language models