🤖 AI Summary
This study investigates how non-experts adjust their opinions and decisions on specialized topics, such as chemistry, in response to AI-generated content, with a focus on the interplay between domain knowledge and information-source attribution. Through a controlled experiment combined with survey instruments, the research systematically compares the influence of ChatGPT-generated texts against human-expert-authored content on participants' judgments, employing a mixed-methods approach for both quantitative and qualitative analysis. Findings reveal that individuals exhibit considerable resistance to revising preexisting positions on expert subjects, and that the perceived utility of the information for decision-making does not significantly differ based on whether it is labeled as AI- or human-generated. This work provides systematic evidence of laypersons' tendency to rely on the conclusions in external materials while resisting belief updating, thereby highlighting both the potential and the risks of generative AI in disseminating specialized knowledge.
📝 Abstract
Modelling users' online decision-making and opinion change is a complex issue that must account for users' personal determinants, the nature of the topic, and their information retrieval activities. Furthermore, generative-AI-based products such as ChatGPT are gradually becoming an essential element of online information retrieval. However, the interaction between domain-specific knowledge and AI-generated content during online decision-making remains unclear. We conducted a lab-based explanatory sequential study with university students to address this research gap. In the experiment, we surveyed participants about a set of general-domain topics that are easy to grasp and another set of domain-specific topics that require an adequate level of chemical science knowledge to fully comprehend. We provided participants with decision-supporting information that was either produced using generative AI or collected from selected expert human-written sources, in order to explore the role of AI-generated content, relative to ordinary information, during decision-making. Our results revealed that participants were less likely to change their opinions on domain-specific topics. Because participants without professional knowledge had difficulty performing in-depth, independent reasoning based on the information, they favoured relying on conclusions presented in the provided materials and tended to stick to their initial opinions. Moreover, information labelled as AI-generated was as helpful to participants as information labelled as dedicatedly human-written, indicating both the vast potential of, and concerns about, AI replacing human experts in helping users tackle professional topics or issues.