🤖 AI Summary
Privacy policies are often lengthy and unintelligible, hindering users’ comprehension and assessment. To address this, we propose and implement a browser extension powered by large language models (LLMs), integrating an interactive dashboard and a real-time conversational interface to support both high-level policy overviews and fine-grained, on-demand questioning. This work presents the first qualitative user study (N=22) on LLM-driven privacy policy evaluation and introduces a novel “overview + dialogue” dual-mode explainable assessment paradigm. Results show clear improvements in users’ policy comprehension and privacy awareness; validate the effectiveness of the dual-mode interface; and systematically identify three critical design challenges—trustworthiness, transparency, and controllability—in LLM-augmented privacy tools. Our findings provide empirical evidence and actionable design insights for developing trustworthy AI systems that meaningfully support privacy practices.
📝 Abstract
Protecting online privacy requires users to engage with and comprehend website privacy policies, but many policies are difficult and tedious to read. We present the first qualitative user study on Large Language Model (LLM)-driven privacy policy assessment. To this end, we build and evaluate an LLM-based privacy policy assessment browser extension, which helps users understand the essence of a lengthy, complex privacy policy while browsing. The tool integrates a dashboard and an LLM chat. In our qualitative user study (N=22), we evaluate the tool’s usability, the understandability of the information it provides, and its impact on privacy awareness. While providing a comprehensible quick overview and a chat for in-depth discussion improves privacy awareness, users note difficulties in building trust in the tool. From our insights, we derive important design implications to guide future policy analysis tools.