🤖 AI Summary
This study presents the first systematic investigation of how large language models (LLMs) affect user attitudes in the context of intimate relationship advice. Using a pretest–posttest experimental design combined with survey questionnaires, the research evaluates users' satisfaction with personalized emotional advice generated by LLMs, along with their perceptions of its reliability and helpfulness, and measures shifts in attitudes toward LLMs before and after interaction. The findings show that users report high satisfaction with the LLM-generated advice and exhibit significantly greater trust and more favorable attitudes toward LLMs after use. These results underscore the role of contextualized support in fostering user trust and provide empirical evidence for deploying LLMs in highly sensitive social domains.
📝 Abstract
Large Language Models (LLMs) are increasingly being used to provide support and advice in personal domains such as romantic relationships, yet little is known about user perceptions of this type of advice. This study investigated how people evaluate LLM-generated romantic relationship advice. Participants rated advice satisfaction, model reliability, and helpfulness, and completed pre- and post-measures of their general attitudes toward LLMs. Overall, participants reported high satisfaction with the LLM-generated advice. Greater satisfaction was, in turn, strongly and positively associated with perceptions of the models' reliability and helpfulness. Importantly, participants' attitudes toward LLMs improved significantly after exposure to the advice, suggesting that supportive and contextually relevant advice can enhance users' trust in and openness toward these AI systems.