AI Summary
This study addresses the challenge that users often lack up-to-date privacy knowledge, hindering their ability to protect sensitive information during interactions with conversational agents. To mitigate this, the authors propose an in-situ privacy prompting mechanism embedded directly within the chat interface, which dynamically delivers risk warnings, protective recommendations, and contextualized just-in-time explanations whenever potentially sensitive user input is detected. Evaluated through a simulated ChatGPT platform using pre- and post-study questionnaires alongside think-aloud protocols, the intervention significantly enhanced users' awareness of AI-related privacy risks. The findings further identify key interface features that foster proactive privacy-protective behaviors. This work establishes a novel paradigm for enabling real-time, lightweight UI interventions that support immediate learning and application of privacy knowledge during authentic human-AI interactions.
Abstract
Supporting users in protecting sensitive information when using conversational agents (CAs) is crucial, as users may undervalue privacy protection due to outdated, partial, or inaccurate knowledge about privacy in CAs. Although privacy knowledge can be developed through standalone resources, it may not readily translate into practice and may remain detached from real-time contexts of use. In this study, we investigate in-context, experiential learning by examining how interactions with privacy tools during chatbot use enhance users' privacy learning. We also explore interface design features that facilitate engagement with these tools and with learning about privacy. To do so, we simulated ChatGPT's interface and integrated it with a just-in-time privacy notice panel. The panel intercepts messages containing sensitive information, warns users about potential sensitivity, offers protective actions, and provides FAQs about privacy in CAs. Participants used versions of the chatbot with and without the privacy panel across two task sessions designed to approximate realistic chatbot use. We qualitatively analyzed participants' pre- and post-test survey responses and think-aloud transcripts, and we describe findings related to (a) participants' perceptions of privacy before and after the task sessions and (b) interface design features that supported or hindered user-led protection of sensitive information. Finally, we discuss future directions for designing user-facing privacy tools in CAs that promote privacy learning and engage users in protecting their privacy.