🤖 AI Summary
Problem: Existing measurement of trust in automation relies heavily on subjective self-reports and lacks objective, continuous, and dynamic assessment methods.
Method: This study adopts music recommendation as a paradigm and integrates multimodal neurophysiological signals (EEG oscillatory activity and pupil diameter), behavioral responses, and reinforcement learning modeling to enable the first neurophysiology-driven inference of dynamic trust.
Contribution/Results: We uncover how system accuracy, expected reward, and prediction error jointly modulate trust formation and decision preferences at the neural level, and we propose a theoretical framework for brain-response-based trust calibration. Our multimodal trust inference method demonstrates that neurophysiological metrics can quantitatively capture dynamic trust fluctuations, with system accuracy exerting a significant effect on trust levels. These findings establish a novel, interpretable, and quantifiable bio-behavioral assessment paradigm for trustworthy AI systems.
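The summary does not spell out the learning rule behind the "expected reward" and "prediction error" terms; a minimal sketch, assuming a standard Rescorla-Wagner delta rule consistent with those terms, would be:

$$
\delta_t = r_t - V_t, \qquad V_{t+1} = V_t + \alpha \, \delta_t
$$

Here $V_t$ is the expected reward before the outcome on trial $t$, $r_t$ the reward received (e.g., whether the user liked the recommended track), $\delta_t$ the reward prediction error, and $\alpha$ a free learning-rate parameter. This specific parameterization is our assumption; the text itself only names the model's quantities.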
📝 Abstract
As people increasingly rely on artificial intelligence (AI) to curate information and make decisions, placing the appropriate amount of trust in automated intelligent systems has become ever more important. However, current measurements of trust in automation still largely rely on self-reports, which are subjective and disruptive to the user. Here, we take music recommendation as a model to investigate the neural and cognitive processes underlying trust in automation. We observed that system accuracy was directly related to users' trust and modulated the influence of recommendation cues on music preference. Modeling users' reward encoding with a reinforcement learning model further revealed that system accuracy, expected reward, and prediction error were related to oscillatory neural activity recorded via EEG and to changes in pupil diameter. Our results provide a neurally grounded account of calibrating trust in automation and highlight the promise of a multimodal approach to developing trustworthy AI systems.
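To illustrate how trial-wise model quantities could be constructed as regressors for EEG and pupillometry analyses, below is a minimal Python sketch of the delta rule given above. The function name `delta_rule_trajectories`, the parameter defaults, and the simulated reward stream are hypothetical; the abstract does not specify how the model was implemented or fit.

```python
import numpy as np

def delta_rule_trajectories(rewards, alpha=0.3, v0=0.5):
    """Trial-wise expected reward (V) and prediction error (delta)
    under a standard Rescorla-Wagner delta rule.

    rewards : sequence of scalar rewards (e.g., 1 = liked track, 0 = disliked)
    alpha   : learning rate (a free parameter, typically fit per participant)
    v0      : initial reward expectation
    """
    rewards = np.asarray(rewards, dtype=float)
    V = np.empty(len(rewards))       # expected reward before each outcome
    delta = np.empty(len(rewards))   # reward prediction error at each outcome
    v = v0
    for t, r in enumerate(rewards):
        V[t] = v
        delta[t] = r - v             # delta_t = r_t - V_t
        v = v + alpha * delta[t]     # V_{t+1} = V_t + alpha * delta_t
    return V, delta

# Hypothetical example: a mostly accurate recommender,
# with rewards drawn at p = 0.8 to mimic a high-accuracy condition.
rng = np.random.default_rng(0)
rewards = rng.binomial(1, 0.8, size=40)
V, delta = delta_rule_trajectories(rewards)
# In an analysis like the one described, V and delta would serve as
# trial-wise regressors against EEG band power or pupil diameter.
```

Varying the reward probability in this sketch mirrors the system-accuracy manipulation described in the abstract: higher accuracy yields higher expected reward and smaller, less frequent negative prediction errors, which is the kind of trial-wise structure the reported EEG and pupil effects track.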