REFER: Mitigating Bias in Opinion Summarisation via Frequency Framed Prompting

📅 2025-09-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses opinion expression bias that arises in large language models (LLMs) during opinion summarisation when opinion distributions are conveyed as abstract probabilities in prompts. The authors propose Frequency-based REpresentation FRaming (REFER), a prompting method inspired by the frequency format hypothesis from cognitive science. REFER presents opinion distributions as natural frequencies (e.g., “12 support, 8 oppose”) instead of abstract probabilities (e.g., “60% support”), requiring no ground-truth distributional knowledge or model fine-tuning. Compared with standard probability-based prompting, REFER significantly improves summary fairness, particularly for larger LLMs and under stronger reasoning instructions. Experiments span multiple LLM scales and diverse prompting baselines, marking the first systematic integration of frequency representations into LLM prompt design. REFER offers a lightweight, generalizable route to fair, unsupervised, low-resource opinion summarisation, mitigating representational bias without architectural or training changes.
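The core of REFER is a representational change to the prompt, not to the model. As a minimal sketch of the idea, the snippet below renders the same opinion distribution in the two framings the paper contrasts; the function names and rendering format are illustrative assumptions, not the paper's actual code.

```python
# Illustrative sketch of frequency framing vs. probability framing (REFER-style).
# Function names and the output format are hypothetical, not from the paper.

def frequency_frame(counts: dict[str, int]) -> str:
    """Render an opinion distribution as natural frequencies, e.g. '12 support, 8 oppose'."""
    return ", ".join(f"{n} {stance}" for stance, n in counts.items())

def probability_frame(counts: dict[str, int]) -> str:
    """Render the same distribution as abstract probabilities, e.g. '60% support, 40% oppose'."""
    total = sum(counts.values())
    return ", ".join(f"{100 * n / total:.0f}% {stance}" for stance, n in counts.items())

counts = {"support": 12, "oppose": 8}
print(frequency_frame(counts))    # -> 12 support, 8 oppose
print(probability_frame(counts))  # -> 60% support, 40% oppose
```

The frequency-framed string would then be embedded in the summarisation prompt in place of the percentage version; note that it keeps the reference class (20 people in total) explicit, which is the property the frequency format hypothesis credits for reducing reasoning bias.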

📝 Abstract
Individuals express diverse opinions; a fair summary should represent these viewpoints comprehensively. Previous research on fairness in opinion summarisation using large language models (LLMs) relied on hyperparameter tuning or on providing ground-truth distributional information in prompts. However, these methods face practical limitations: end-users rarely modify default model parameters, and accurate distributional information is often unavailable. Building upon cognitive science research demonstrating that frequency-based representations reduce systematic biases in human statistical reasoning by making reference classes explicit and reducing cognitive load, this study investigates whether frequency-framed prompting (REFER) can similarly enhance fairness in LLM opinion summarisation. Through systematic experimentation with different prompting frameworks, we adapted techniques known to improve human reasoning to elicit more effective information processing in language models compared to abstract probabilistic representations. Our results demonstrate that REFER enhances fairness in language models when summarising opinions. This effect is particularly pronounced in larger language models and when using stronger reasoning instructions.
Problem

Research questions and friction points this paper is trying to address.

Mitigating bias in opinion summarisation via frequency framing
Reducing reliance on hyperparameter tuning and distributional data
Enhancing fairness in LLM outputs using cognitive science principles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency framed prompting reduces bias
Adapts cognitive science to language models
Enhances fairness without distributional information