Hide or Highlight: Understanding the Impact of Factuality Expression on User Trust

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how the presentation of factuality information affects user trust in AI, specifically whether content estimated to be less factual should be explicitly disclosed or concealed in question-answering scenarios. Method: Using large language models for factuality assessment, we designed four presentation strategies (transparent highlighting of less factual content, attention highlighting of factual content, opaque removal of less factual content, and ambiguity, which renders less factual content vague) and conducted a controlled human-subjects experiment (N = 148). Contribution/Results: Concealing or blurring less factual content significantly increased user trust (p < 0.01) without degrading subjective perceptions of answer quality, and outperformed error highlighting in balancing credibility and usability. To our knowledge, this is the first systematic empirical validation that selective presentation calibrates trust more effectively than explicit error marking. The findings provide evidence-based design principles and methodological guidance for trustworthy AI interfaces, particularly for content rendering strategies that balance reliability and user experience.
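
To make the four strategies concrete, here is a minimal sketch of how an answer with sentence-level factuality scores might be rendered under each one. The `render` function, the 0.5 threshold, the bold markup, and the `make_vague` helper are all illustrative assumptions; the paper's actual scoring pipeline and presentation details are not given in this summary.

```python
from typing import List, Tuple

Sentence = Tuple[str, float]  # (sentence text, estimated factuality in [0, 1])

def make_vague(text: str) -> str:
    # Placeholder for the ambiguity strategy: the summary does not say how
    # vagueness is produced, so this simply prefixes a hedge.
    return f"(roughly) {text}"

def render(sentences: List[Sentence], strategy: str, threshold: float = 0.5) -> str:
    """Render an answer under one of the four presentation strategies."""
    parts: List[str] = []
    for text, score in sentences:
        low = score < threshold  # sentence estimated to be less factual
        if strategy == "transparent":
            parts.append(f"**{text}**" if low else text)     # highlight errors
        elif strategy == "attention":
            parts.append(text if low else f"**{text}**")     # highlight facts
        elif strategy == "opaque":
            if not low:                                      # silently drop errors
                parts.append(text)
        elif strategy == "ambiguity":
            parts.append(make_vague(text) if low else text)  # blur errors
        else:                                                # baseline: unmodified
            parts.append(text)
    return " ".join(parts)

answer = [
    ("The Eiffel Tower is in Paris.", 0.97),
    ("It was completed in 1892.", 0.20),  # low estimated factuality (it was 1889)
]
print(render(answer, "opaque"))     # The Eiffel Tower is in Paris.
print(render(answer, "ambiguity")) # ... Paris. (roughly) It was completed in 1892.
```

The `opaque` and `ambiguity` branches correspond to the two strategies the study found most trust-preserving.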

📝 Abstract
Large language models are known to produce outputs that are plausible but factually incorrect. To prevent people from making erroneous decisions by blindly trusting AI, researchers have explored various ways of communicating factuality estimates in AI-generated outputs to end-users. However, little is known about whether revealing content estimated to be factually incorrect influences users' trust when compared to hiding it altogether. We tested four different ways of disclosing an AI-generated output with factuality assessments: transparent (highlights less factual content), attention (highlights factual content), opaque (removes less factual content), and ambiguity (makes less factual content vague), and compared them with a baseline response without factuality information. We conducted a human-subjects study (N = 148) applying these strategies in question-answering scenarios. We found that the opaque and ambiguity strategies led to higher trust while maintaining perceived answer quality, compared to the other strategies. We discuss the efficacy of hiding presumably less factual content to build end-user trust.
Problem

Research questions and friction points this paper is trying to address.

How factuality expression affects user trust in AI outputs
Comparing strategies for disclosing factual inaccuracies in AI responses
Evaluating user trust when hiding versus highlighting less factual content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transparent strategy that highlights content estimated to be less factual
Attention strategy that highlights content estimated to be factual
Opaque strategy that silently removes less factual content
Ambiguity strategy that renders less factual content vague (a toy vaguening transform is sketched below)
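
One way to picture the ambiguity strategy is a transform that blurs specific claims. The rule-based `vaguify` below is purely a hypothetical illustration; the summary does not describe how the study produced vague content, and a real system would more plausibly ask an LLM to rewrite the sentence.

```python
import re

def vaguify(sentence: str) -> str:
    """Toy ambiguity transform: hedge exact figures so a possibly wrong
    specific (e.g. a year) reads as an approximation instead."""
    return re.sub(r"\b\d+(?:[.,]\d+)*\b", lambda m: f"about {m.group(0)}", sentence)

print(vaguify("It was completed in 1892 and is 324 meters tall."))
# It was completed in about 1892 and is about 324 meters tall.
```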