🤖 AI Summary
This study investigates the formation mechanisms underlying university students' trust in ChatGPT. Method: Using a mixed-methods design that combined a survey of 115 UK undergraduate and postgraduate students with four semi-structured interviews, we applied a four-dimensional framework spanning user attributes, trust dimensions, task contexts, and social cognition, analysed through multivariate regression and thematic coding. Contribution/Results: The key predictors of trust were task verifiability, perceived competence, ethical consistency, and prior usage experience. Confidence in citation accuracy emerged as the strongest predictor of overall trust, revealing pronounced automation bias. Students with technical backgrounds exhibited higher trust only for specific tasks, challenging technological determinism. Domain expertise and ethical risk perception were the most salient determinants; usability and transparency followed, whereas anthropomorphism and platform reputation showed no significant effect. Trust levels varied significantly across task types and were moderated by perceived social acceptability.
📝 Abstract
This mixed-methods inquiry examined four domains that shape university students' trust in ChatGPT: user attributes, seven delineated trust dimensions, task context, and perceived societal impact. Data were collected through a survey of 115 UK undergraduate and postgraduate students and four complementary semi-structured interviews. Behavioural engagement outweighed demographics: frequent use increased trust, whereas self-reported understanding of large-language-model mechanics reduced it. Among the dimensions, perceived expertise and ethical risk were the strongest predictors of overall trust; ease of use and transparency had secondary effects, while human-likeness and reputation were non-significant. Trust was highly task-contingent: highest for coding and summarising, lowest for entertainment and citation generation. Yet confidence in ChatGPT's referencing ability, despite its known inaccuracies, was the single strongest correlate of global trust, indicating automation bias. Computer-science students surpassed their peers only in trusting the system for proofreading and writing, suggesting that technical expertise refines rather than inflates reliance. Finally, students who viewed AI's societal impact positively reported the greatest trust, whereas mixed or negative outlooks dampened confidence. These findings show that trust in ChatGPT hinges on task verifiability, perceived competence, ethical alignment, and direct experience, and they underscore the need for transparency, accuracy cues, and user education when deploying LLMs in academic settings.