How Individual Traits and Language Styles Shape Preferences In Open-ended User-LLM Interaction: A Preliminary Study

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how individual user traits, such as need for cognition and self-confidence, moderate preferences for the linguistic styles of large language model (LLM) responses (e.g., authority, certainty, redundancy), with particular attention to the "misleading preference" risk, in which stylistic appeal masks informational deficiencies. Employing a mixed-methods approach, it integrates exploratory interviews, controlled experiments, multidimensional linguistic annotation (LIWC plus BERT-based certainty scoring), and hierarchical regression modeling. The study provides preliminary empirical evidence of significant style–trait interaction effects: users exhibit heterogeneous stylistic preferences strongly conditioned by cognitive characteristics; in particular, high-need-for-cognition individuals resist overly certain phrasing. Moving beyond unidimensional evaluations of accuracy or style, it reveals the double-edged nature of style preference: while it enhances perceived credibility, it also increases susceptibility to accepting hallucinations. These findings inform human-centered LLM design and risk governance with both theoretical insight and empirical grounding.
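The style–trait interaction effect described in the summary can be sketched as a hierarchical (stepwise) regression: fit a main-effects model, then add a style × trait interaction term and compare fit. The variable names, coefficients, and data below are illustrative assumptions for exposition, not the paper's actual dataset or model specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Simulated predictors: response certainty (a language-style feature)
# and need for cognition (an individual trait), both on [0, 1].
certainty = rng.uniform(0.0, 1.0, n)
nfc = rng.uniform(0.0, 1.0, n)

# Simulated preference score: high-NFC users discount overly certain
# phrasing, modeled here as a negative interaction term (an assumption).
preference = (0.5 + 0.8 * certainty + 0.3 * nfc
              - 1.5 * certainty * nfc
              + rng.normal(0.0, 0.1, n))

def ols_r2(X, y):
    """Fit OLS via least squares and return (coefficients, R^2)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

ones = np.ones(n)

# Step 1: main effects only.
X1 = np.column_stack([ones, certainty, nfc])
_, r2_step1 = ols_r2(X1, preference)

# Step 2: add the style x trait interaction term.
X2 = np.column_stack([ones, certainty, nfc, certainty * nfc])
beta2, r2_step2 = ols_r2(X2, preference)

print(f"R^2, main effects only:   {r2_step1:.3f}")
print(f"R^2, with interaction:    {r2_step2:.3f}")
print(f"interaction coefficient:  {beta2[3]:.3f}")
```

A jump in R² from Step 1 to Step 2, together with a significant negative interaction coefficient, is the regression signature of the reported finding that trait and style jointly, not independently, shape preference.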

📝 Abstract
What makes an interaction with an LLM more preferable for the user? While it is intuitive to assume that the information accuracy of the LLM's responses would be one of the influential variables, recent studies have found that inaccurate LLM responses can still be preferred when they are perceived to be more authoritative, certain, well-articulated, or simply verbose. These variables fall under the broader category of language style, implying that the style of the LLM's responses might meaningfully influence users' preferences. This hypothesized dynamic could have double-edged consequences: enhancing the overall user experience while simultaneously increasing users' susceptibility to risks such as LLM misinformation or hallucinations. In this short paper, we present our preliminary studies exploring this subject. Through a series of exploratory and experimental user studies, we found that an LLM's language style does indeed influence user preferences, but how, and which, language styles influence preference varies across user populations and, more interestingly, is moderated by users' own individual traits. As preliminary work, our findings should be interpreted with caution, particularly given the limitations of our samples, which lack wider demographic diversity and larger sample sizes. Our future work will first address these limitations, enabling a more comprehensive analysis of the joint effects among language style, individual traits, and preferences, and will further investigate potential causal relationships among, and beyond, these variables.
Problem

Research questions and friction points this paper is trying to address.

How LLM language style affects user preferences
Impact of individual traits on interaction preferences
Balancing user experience and misinformation risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes LLM language style impact on preferences
Links user traits to language style preferences
Explores joint effects via experimental studies