🤖 AI Summary
This study investigates how ordinary users employ prompting strategies in conversational information seeking (CIS) with ChatGPT and how AI responses adapt contextually—particularly across scientific, health, and policy topics that differ in controversy level. Method: A nationally representative mixed-methods study integrates dialogue log analysis, computational text-complexity metrics (e.g., lexical diversity, syntactic depth), and user attitude surveys. Contribution/Results: Only a small subset of highly educated, Democratic-leaning users deployed advanced prompting techniques—revealing a novel dimension of the digital divide. ChatGPT exhibited active contextual adaptation on controversial topics, significantly increasing the cognitive complexity and external citation density of its responses. Although such high-complexity responses were rated less favorably in the moment, they fostered more positive topic attitudes. This is the first empirical demonstration of co-adaptation between user prompting behavior and AI response style, offering foundational insights for CIS interface design and AI literacy interventions.
📝 Abstract
Conversational AI, such as ChatGPT, is increasingly used for information seeking. However, little is known about how ordinary users actually prompt, and how ChatGPT adapts its responses, in real-world conversational information seeking (CIS). In this study, a nationally representative sample of 937 U.S. adults engaged in multi-turn CIS with ChatGPT on both controversial and non-controversial topics across science, health, and policy contexts. We analyzed both user prompting strategies and the communication styles of ChatGPT responses. The findings revealed behavioral signals of a digital divide: only 19.1% of users employed prompting strategies, and these users were disproportionately more educated and Democrat-leaning. Further, ChatGPT demonstrated contextual adaptation: responses to controversial topics contained greater cognitive complexity and more external references than responses to non-controversial topics. Notably, cognitively complex responses were perceived as less favorable but produced more positive issue-relevant attitudes. This study highlights disparities in user prompting behaviors and shows how user prompts and AI responses together shape information seeking with conversational AI.
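The abstract does not specify how the text-complexity metrics were operationalized. As a rough illustration only (not the study's actual pipeline), metrics of this kind can be approximated with simple proxies: lexical diversity as a type-token ratio, and syntactic complexity via mean sentence length. The `response` text below is a made-up example.

```python
# Illustrative sketch (not from the study): two simple text-complexity
# proxies of the kind the summary mentions. Real analyses would likely
# use full NLP parsers for syntactic depth.
import re

def lexical_diversity(text: str) -> float:
    """Type-token ratio: unique word forms / total word forms."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def mean_sentence_length(text: str) -> float:
    """Average words per sentence, a crude syntactic-complexity proxy."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    counts = [len(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    return sum(counts) / len(counts)

# Hypothetical AI response text for demonstration.
response = ("The evidence is mixed. Several studies report benefits, "
            "but replication remains limited.")
print(round(lexical_diversity(response), 2))    # → 1.0 (all words unique)
print(round(mean_sentence_length(response), 2)) # → 6.0
```

Higher values on such metrics would correspond to the "more cognitively complex" responses the study associates with controversial topics.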