Exploring Human Perceptions of AI Responses: Insights from a Mixed-Methods Study on Risk Mitigation in Generative Models

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative AI frequently produces hallucinations and harmful content, yet human awareness of existing risk-mitigation strategies remains limited. Method: We conduct a mixed-methods within-subject experiment, integrating quantitative scoring with qualitative feedback to assess AI responses along multiple dimensions: factual accuracy, fairness, harmlessness, and relevance. We further propose novel evaluation metrics that jointly preserve semantic fidelity and capture contextual sensitivity. Results: Native language background and AI domain expertise significantly influence human judgments, underscoring heightened sensitivity to linguistic nuance. Our proposed metrics bridge the gap between human evaluations and conventional automated metrics, demonstrating strong alignment with human judgment while retaining computational tractability. This work establishes a reproducible, interpretable, and human-centered framework for evaluating AI response quality in human-AI collaborative settings.

📝 Abstract
With the rapid uptake of generative AI, investigating human perceptions of generated responses has become crucial. A major challenge is their 'aptitude' for hallucinating and generating harmful content. Despite major efforts to implement guardrails, human perceptions of these mitigation strategies remain largely unknown. We conducted a mixed-methods experiment to evaluate the responses of a mitigation strategy across multiple dimensions: faithfulness, fairness, harm-removal capacity, and relevance. In a within-subject study design, 57 participants assessed responses under two conditions: a harmful response paired with its mitigation, and the mitigated response alone. Results revealed that participants' native language, AI work experience, and annotation familiarity significantly influenced their evaluations. Participants showed high sensitivity to linguistic and contextual attributes, penalizing minor grammar errors while rewarding preserved semantic context. This contrasts with how language is often treated in the quantitative evaluation of LLMs. We also introduce new metrics for training and evaluating mitigation strategies, along with insights for human-AI evaluation studies.
Problem

Research questions and friction points this paper is trying to address.

Investigates human perceptions of AI risk mitigation strategies
Evaluates mitigation effectiveness across faithfulness, fairness, and harm removal
Examines how user background influences assessment of AI responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed-methods experiment evaluating mitigation across multiple dimensions
Within-subject study comparing harmful and mitigated AI responses
Introduced new metrics for training and evaluating mitigation strategies (an illustrative sketch follows this list)
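The page does not reproduce the paper's metric definitions. Purely as an illustrative assumption, the sketch below shows one way a mitigation-quality score could jointly reward preserved semantics and removed harm; the embedding model, the harm_score callable, and the weighting are hypothetical stand-ins, not the authors' formulation.

```python
# Illustrative sketch only (not the paper's metric): combine semantic fidelity
# between the original and mitigated responses with a harm-removal term.
from typing import Callable

from sentence_transformers import SentenceTransformer, util

_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def mitigation_score(
    original: str,
    mitigated: str,
    harm_score: Callable[[str], float],  # hypothetical: harm probability in [0, 1]
    alpha: float = 0.5,                  # assumed fidelity / harm-removal trade-off
) -> float:
    """Return a score in [0, 1] balancing semantic fidelity and harm removal."""
    # Semantic fidelity: cosine similarity of sentence embeddings, clamped to [0, 1].
    emb = _model.encode([original, mitigated], convert_to_tensor=True)
    fidelity = max(0.0, util.cos_sim(emb[0], emb[1]).item())

    # Harm removal: drop in harm from the original to the mitigated response.
    harm_removal = max(0.0, harm_score(original) - harm_score(mitigated))

    return alpha * fidelity + (1 - alpha) * harm_removal
```

A weighted sum is only one possible aggregation; any such trade-off weight would need to be tuned against human ratings like those collected in the study.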
🔎 Similar Papers
No similar papers found.
Heloisa Candello
IBM Research, São Paulo, Brazil
Muneeza Azmat
IBM Research, Yorktown Heights, New York, United States
Uma Sushmitha Gunturi
IBM, San Jose, California, United States
Raya Horesh
IBM T.J. Watson Research Center
Optimization, Inverse Problems, Numerical Analysis, Numerical PDE, Physics-based and Data-driven Modeling
Rogerio Abreu de Paula
IBM Research, São Paulo, SP, Brazil
Heloisa Pimentel
UNICAMP, São Paulo, São Paulo, Brazil
Marcelo Carpinette Grave
IBM Research, São Paulo, SP, Brazil
Aminat Adebiyi
IBM Research
Experiment Design, AI Safety, Quantitative Analysis, Human Participants, Sensors
Tiago Machado
IBM Research
Artificial Intelligence, Computational Creativity, Game Software Engineering, Game AI
Maysa Malfiza Garcia de Macedo
IBM Research, São Paulo, SP, Brazil