🤖 AI Summary
Generative AI frequently produces hallucinations and harmful content, yet human perceptions of existing risk-mitigation strategies remain largely unstudied. Method: We conduct a mixed-methods within-subject experiment, integrating quantitative scoring with qualitative feedback to assess AI responses along multiple dimensions: factual accuracy, fairness, harmlessness, and relevance. We further propose novel evaluation metrics that jointly preserve semantic fidelity and capture contextual sensitivity. Results: Native language background and AI domain expertise significantly influence human judgments, underscoring heightened sensitivity to linguistic nuance. Our proposed metrics bridge the gap between human evaluations and conventional automated metrics, demonstrating strong alignment with human judgment while retaining computational tractability. This work establishes a reproducible, interpretable, and human-centered framework for evaluating AI response quality in human-AI collaborative settings.
📝 Abstract
With the rapid uptake of generative AI, investigating human perceptions of generated responses has become crucial. A major challenge is these models' propensity to hallucinate and to generate harmful content. Despite major efforts to implement guardrails, human perceptions of these mitigation strategies remain largely unknown. We conducted a mixed-methods experiment to evaluate the responses of a mitigation strategy across multiple dimensions: faithfulness, fairness, harm-removal capacity, and relevance. In a within-subject study design, 57 participants assessed responses under two conditions: the harmful response alongside its mitigated version, and the mitigated response alone. Results revealed that participants' native language, AI work experience, and annotation familiarity significantly influenced their evaluations. Participants showed high sensitivity to linguistic and contextual attributes, penalizing minor grammar errors while rewarding preserved semantic context. This contrasts with how language is typically treated in the quantitative evaluation of LLMs. We also introduce new metrics for training and evaluating mitigation strategies, and offer insights for designing human-AI evaluation studies.