Mitigating the Threshold Priming Effect in Large Language Model-Based Relevance Judgments via Personality Infusing

📅 2025-11-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how simulated Big Five personality traits in large language models (LLMs) modulate priming effects in relevance judgment tasks. To address LLMs' susceptibility to carryover bias from prior judgments, the authors introduce personality prompting, which systematically injects the five dimensions (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) into relevance-assessment prompts. Empirical evaluation spans multiple models (Llama-3, Qwen, GPT-4) on the TREC 2021 and 2022 Deep Learning tracks. Results show that high Openness and low Neuroticism significantly attenuate priming effects, and that the optimal personality configuration is model- and task-dependent. The study pioneers the integration of personality-psychology principles into prompt engineering, establishing an interpretable and controllable approach to mitigating judgment biases in LLMs.

📝 Abstract
Recent research has explored LLMs as scalable tools for relevance labeling, but studies indicate they are susceptible to priming effects, where prior relevance judgments influence later ones. Although psychological theories link personality traits to such biases, it is unclear whether simulated personalities in LLMs exhibit similar effects. We investigate how Big Five personality profiles in LLMs influence priming in relevance labeling, using multiple LLMs on TREC 2021 and 2022 Deep Learning Track datasets. Our results show that certain profiles, such as High Openness and Low Neuroticism, consistently reduce priming susceptibility. Additionally, the most effective personality in mitigating priming may vary across models and task types. Based on these findings, we propose personality prompting as a method to mitigate threshold priming, connecting psychological evidence with LLM-based evaluation practices.
Problem

Research questions and friction points this paper is trying to address.

Investigates how Big Five personality traits in LLMs affect priming in relevance labeling
Examines whether certain personality profiles reduce susceptibility to priming effects
Proposes personality prompting to mitigate threshold priming in LLM-based evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personality prompting reduces priming effects in LLMs
Big Five traits like High Openness lower bias susceptibility
Method adapts to different models and task types
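The personality-prompting idea described above can be sketched as ordinary prompt construction: render a Big Five profile as a persona preamble and judge each query-passage pair in a fresh prompt. This is a minimal illustrative sketch; the prompt wording, function names, and the 0-3 grading scale framing are assumptions, not the paper's actual prompts.

```python
# Illustrative sketch of "personality prompting" for relevance judgment.
# The exact prompt wording below is hypothetical, not taken from the paper.

def personality_preamble(profile: dict) -> str:
    """Render a Big Five profile (trait -> 'high'/'low') as a persona preamble."""
    traits = ", ".join(f"{level} {trait}" for trait, level in profile.items())
    return (f"You are an assessor with the following personality profile: "
            f"{traits}. Answer in a way consistent with this personality.")

def relevance_prompt(profile: dict, query: str, passage: str) -> str:
    """Compose a single-document relevance-judgment prompt.

    Each document is judged in its own prompt, with no prior judgments
    in context -- the setting where threshold priming would otherwise
    carry over between items.
    """
    return "\n\n".join([
        personality_preamble(profile),
        f"Query: {query}",
        f"Passage: {passage}",
        "On a scale of 0 (not relevant) to 3 (perfectly relevant), "
        "how relevant is the passage to the query? Reply with a single digit.",
    ])

# The profile the paper found effective: high Openness, low Neuroticism.
profile = {"Openness": "high", "Neuroticism": "low"}
prompt = relevance_prompt(
    profile,
    "how do llms judge relevance",
    "LLMs can label passage relevance at scale...",
)
print(prompt)
```

The resulting string would be sent as the model's instruction (e.g. a system or user message); only the `profile` dictionary changes when sweeping personality configurations across models and tasks.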
Nuo Chen
The Hong Kong Polytechnic University, China
Hanpei Fang
Waseda University, Japan
Jiqun Liu
The University of Oklahoma, USA
Wilson Wei
EureXa Labs, Singapore
Tetsuya Sakai
Waseda University, Japan
information retrieval · interaction · natural language processing · social good
Xiao-Ming Wu
The Hong Kong Polytechnic University, China