Investigating In-Context Privacy Learning by Integrating User-Facing Privacy Tools into Conversational Agents

πŸ“… 2026-03-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study addresses the challenge that users often lack up-to-date privacy knowledge, which hinders their ability to protect sensitive information when interacting with conversational agents. To mitigate this, the authors propose an in-situ privacy prompting mechanism embedded directly in the chat interface that delivers risk warnings, protective recommendations, and contextualized just-in-time explanations whenever potentially sensitive user input is detected. Evaluated on a simulated ChatGPT platform using pre- and post-study questionnaires alongside think-aloud protocols, the intervention enhanced users' awareness of AI-related privacy risks. The findings further identify interface features that foster proactive privacy-protective behaviors. This work illustrates how lightweight, real-time UI interventions can support immediate learning and application of privacy knowledge during authentic human-AI interactions.

πŸ“ Abstract
Supporting users in protecting sensitive information when using conversational agents (CAs) is crucial, as users may undervalue privacy protection due to outdated, partial, or inaccurate knowledge about privacy in CAs. Although privacy knowledge can be developed through standalone resources, it may not readily translate into practice and may remain detached from real-time contexts of use. In this study, we investigate in-context, experiential learning by examining how interactions with privacy tools during chatbot use enhance users' privacy learning. We also explore interface design features that facilitate engagement with these tools and learning about privacy by simulating ChatGPT's interface which we integrated with a just-in-time privacy notice panel. The panel intercepts messages containing sensitive information, warns users about potential sensitivity, offers protective actions, and provides FAQs about privacy in CAs. Participants used versions of the chatbot with and without the privacy panel across two task sessions designed to approximate realistic chatbot use. We qualitatively analyzed participants' pre- and post-test survey responses and think-aloud transcripts and describe findings related to (a) participants' perceptions of privacy before and after the task sessions and (b) interface design features that supported or hindered user-led protection of sensitive information. Finally, we discuss future directions for designing user-facing privacy tools in CAs that promote privacy learning and user engagement in protecting privacy in CAs.
Problem

Research questions and friction points this paper is trying to address.

conversational agents
privacy learning
in-context learning
user-facing privacy tools
sensitive information
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
privacy tools
conversational agents
just-in-time notice
user engagement
Mohammad Hadi Nezhad
University of Massachusetts Amherst
Francisco Enrique Vicente Castro
Research Scientist, New York University
Human-Computer Interaction, Computing Ethics, Learning Sciences, Health Informatics, AI Education
Ivon Arroyo
University of Massachusetts Amherst