🤖 AI Summary
This study investigates how humans learn and calibrate trust in the absence of ground-truth feedback, relying instead on diverse opinions from others. To address this, the authors propose a semi-supervised "delusional hedge" algorithm that models how individuals jointly weigh a source's accuracy and its consistency with other reliable sources, thereby capturing cognitive integration under partial supervision. This work extends the classical hedge algorithm to a semi-supervised setting with explicit internal-consistency constraints. Combining behavioral experiments with computational model fitting, the study demonstrates that human judgments align significantly better with predictions of the delusional hedge than with those of the standard hedge or heuristic baselines. The results suggest that this framework more accurately captures the cognitive mechanisms underlying human integration of heterogeneous opinions under uncertainty.
📝 Abstract
Whereas cognitive models of learning often assume direct experience with both the features of an event and with a true label or outcome, much of everyday learning arises from hearing the opinions of others, without direct access to either the experience or the ground-truth outcome. We consider how people can learn which opinions to trust in such scenarios by extending the hedge algorithm: a classic solution for learning from diverse information sources. We first introduce a semi-supervised variant we call the delusional hedge, capable of learning from both supervised and unsupervised experiences. In two experiments, we examine the alignment between human judgments and predictions from the standard hedge, the delusional hedge, and a heuristic baseline model. Results indicate that humans effectively incorporate both labeled and unlabeled information in a manner consistent with the delusional hedge algorithm, suggesting that human learners not only gauge the accuracy of information sources but also their consistency with other reliable sources. The findings advance our understanding of human learning from diverse opinions, with implications for the development of algorithms that better capture how people learn to weigh conflicting information sources.
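To make the two models concrete, here is a minimal sketch of the standard hedge weight update, plus one possible semi-supervised round in the spirit of the delusional hedge. The standard multiplicative update is the classic algorithm; the unlabeled-round logic (using the current weighted opinion as a pseudo-label, so sources are scored by consistency with trusted sources) is our assumption based on the summary above, not the authors' exact formulation. The function names, the learning rate `eta`, and the absolute-error loss are all illustrative choices.

```python
import math

def hedge_update(weights, losses, eta=0.5):
    """Standard hedge: exponentially downweight each source by its loss,
    then renormalize so the weights sum to 1."""
    scaled = [w * math.exp(-eta * loss) for w, loss in zip(weights, losses)]
    total = sum(scaled)
    return [w / total for w in scaled]

def delusional_hedge_round(weights, opinions, label=None, eta=0.5):
    """One round of a semi-supervised variant (illustrative sketch).
    With a true label, this reduces to the standard hedge. Without one,
    a pseudo-label is formed from the current weighted opinion, so each
    source's loss measures consistency with the trusted consensus."""
    if label is None:
        # Hallucinated label: weighted average of the sources' opinions
        # (an assumption about the mechanism, not the paper's exact rule).
        label = sum(w * o for w, o in zip(weights, opinions))
    losses = [abs(o - label) for o in opinions]
    return hedge_update(weights, losses, eta)

# Three sources, starting with equal trust.
w = [1 / 3, 1 / 3, 1 / 3]
# Labeled round: source 0 was right (loss 0), sources 1 and 2 wrong.
w = hedge_update(w, [0.0, 1.0, 1.0])
# Unlabeled round: weights shift toward sources consistent with consensus.
w = delusional_hedge_round(w, [0.9, 0.1, 0.2])
```

After the labeled round, the accurate source carries the largest weight, so in the unlabeled round the pseudo-label leans toward its opinion, which is the "consistency with other reliable sources" intuition from the abstract.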