Understanding Trust Toward Human versus AI-generated Health Information through Behavioral and Physiological Sensing

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates disparities in public trust between AI-generated and human-authored health information, examining how transparency labels (e.g., "AI-generated") moderate trust, particularly when the information is inaccurate. Employing a mixed-methods approach, it combines an online survey with a controlled laboratory experiment, concurrently collecting multimodal physiological signals (eye-tracking, electrocardiography (ECG), electrodermal activity (EDA), and skin temperature) alongside behavioral data. The authors build a biometrically grounded trust prediction model and find that source and label pull in opposite directions: LLM-authored content is trusted more than human-authored content, yet labeling content as "AI-generated" significantly reduces its perceived trustworthiness. The model predicts binary self-reported trust with 73% accuracy and classifies information provenance with 65% accuracy from physiological and behavioral features. These findings frame trust as something that can be verified from sensed behavior and physiology, offering empirical support and methodological grounding for transparent AI health communication and trustworthy human–AI collaboration.

📝 Abstract
As AI-generated health information proliferates online and becomes increasingly indistinguishable from human-sourced information, it becomes critical to understand how people trust and label such content, especially when the information is inaccurate. We conducted two complementary studies: (1) a mixed-methods survey (N=142) employing a 2 (source: Human vs. LLM) × 2 (label: Human vs. AI) × 3 (type: General, Symptom, Treatment) design, and (2) a within-subjects lab study (N=40) incorporating eye-tracking and physiological sensing (ECG, EDA, skin temperature). Participants were presented with health information varying by source-label combinations and asked to rate their trust, while their gaze behavior and physiological signals were recorded. We found that LLM-generated information was trusted more than human-generated content, whereas information labeled as human was trusted more than that labeled as AI. Trust remained consistent across information types. Eye-tracking and physiological responses varied significantly by source and label. Machine learning models trained on these behavioral and physiological features predicted binary self-reported trust levels with 73% accuracy and information source with 65% accuracy. Our findings demonstrate that adding transparency labels to online health information modulates trust. Behavioral and physiological features show potential to verify trust perceptions and indicate if additional transparency is needed.
Problem

Research questions and friction points this paper is trying to address.

Investigates how people trust AI- versus human-generated health information
Examines how transparency labels affect trust in online health content
Uses behavioral and physiological signals to predict trust levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combined a survey with lab-based physiological sensing to assess trust
Used eye-tracking, ECG, EDA, and skin temperature to measure responses
Applied machine learning to predict trust and source from these features
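The last step above, classifying binary trust from behavioral and physiological features, can be sketched as a standard supervised pipeline. The feature set and synthetic data below are illustrative assumptions for the sketch, not the authors' actual features, dataset, or model:

```python
# Hedged sketch: binary trust classification from behavioral/physiological
# features, in the spirit of the paper's approach. All feature names and
# values here are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200  # hypothetical number of labeled trials

# Illustrative features: mean fixation duration (ms), ECG-derived heart-rate
# variability (RMSSD, ms), EDA peak count, and skin-temperature change (°C).
X = np.column_stack([
    rng.normal(250, 40, n),    # fixation duration
    rng.normal(45, 12, n),     # HRV (RMSSD)
    rng.poisson(3, n),         # EDA peaks
    rng.normal(0.1, 0.05, n),  # skin-temperature delta
])
# Synthetic binary trust labels, loosely correlated with the features so the
# classifier has signal to learn from.
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 30, n) > 335).astype(int)

# Standardize features, then fit a logistic-regression classifier with
# 5-fold cross-validation, mirroring a typical trust-prediction evaluation.
clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Predicting the information source (human vs. LLM) with the same features is the same pipeline with a different label vector; only the target `y` changes.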