Benchmarking LLM Privacy Recognition for Social Robot Decision Making

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how privacy-aware mainstream large language models (LLMs) are when handling sensitive personal data in domestic social robot scenarios, focusing on the tension between utility and privacy risk. Method: Drawing on Contextual Integrity theory, the authors construct ecologically valid, privacy-sensitive household interaction scenarios. They conduct a user survey (N = 450), analyze responses from 10 state-of-the-art LLMs, and evaluate four prompting strategies to quantify alignment between human and LLM privacy judgments. Contribution/Results: Human–LLM agreement on privacy sensitivity is low, and conventional prompting techniques yield minimal improvement in privacy recognition. The work proposes an evaluation framework for LLM privacy awareness tailored to social robots and empirically exposes structural limitations in current LLMs' privacy-sensitive decision making, providing both theoretical grounding and empirical caution for designing trustworthy domestic AI systems.

📝 Abstract
Social robots are embodied agents that interact with people while following human communication norms. These robots interact using verbal and non-verbal cues, and share the physical environments of people. While social robots have previously utilized rule-based systems or probabilistic models for user interaction, the rapid evolution of large language models (LLMs) presents new opportunities to develop LLM-empowered social robots for enhanced human-robot interaction. To fully realize these capabilities, however, robots need to collect data such as audio, fine-grained images, video, and locations. As a result, LLMs often process sensitive personal information, particularly within home environments. Given the tension between utility and privacy risks, evaluating how current LLMs manage sensitive data is critical. Specifically, we aim to explore the extent to which out-of-the-box LLMs are privacy-aware in the context of household social robots. In this study, we present a set of privacy-relevant scenarios crafted through the lens of Contextual Integrity (CI). We first survey users' privacy preferences regarding in-home social robot behaviors and then examine how their privacy orientation affects their choices of these behaviors (N = 450). We then provide the same set of scenarios and questions to state-of-the-art LLMs (N = 10) and find that the agreement between humans and LLMs is low. To further investigate the capabilities of LLMs as a potential privacy controller, we implement four additional prompting strategies and compare their results. Finally, we discuss the implications and potential of AI privacy awareness in human-robot interaction.
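
The abstract reports that agreement between human and LLM privacy judgments is low. The paper does not specify its agreement statistic here, but a standard, chance-corrected choice for this kind of rater comparison is Cohen's kappa; the sketch below (with made-up binary judgments, 1 = "sharing is appropriate") illustrates how such agreement could be computed.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under independence of the two label distributions.
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Hypothetical judgments over eight scenarios (not data from the paper).
human = [1, 0, 0, 1, 0, 0, 1, 0]
llm   = [1, 1, 0, 1, 1, 1, 1, 0]
print(round(cohens_kappa(human, llm), 3))  # → 0.333, i.e. weak agreement
```

Kappa near 0 means the raters agree no more than chance would predict, which is the kind of result the paper's "low agreement" finding describes.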
Problem

Research questions and friction points this paper is trying to address.

Assessing privacy awareness of LLMs in social robots
Evaluating human-LLM agreement on privacy preferences
Exploring LLMs as potential privacy controllers
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-empowered social robots for enhanced human-robot interaction
Privacy evaluation using Contextual Integrity scenarios
Prompting strategies for AI privacy control
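
The paper probes LLMs with Contextual Integrity (CI) scenarios under several prompting strategies. Its exact prompts are not reproduced here; the sketch below is a hypothetical illustration of how CI's information-flow parameters (data subject, information type, recipient, transmission principle) could be rendered into probes, with the strategy names as stand-ins.

```python
# Hypothetical CI-framed privacy probe; field names and strategy
# preambles are illustrative, not taken from the paper.
CI_TEMPLATE = (
    "A home social robot observed the following:\n"
    "- Data subject: {subject}\n"
    "- Information type: {attribute}\n"
    "- Potential recipient: {recipient}\n"
    "- Transmission principle: {principle}\n"
    "Should the robot share this information? Answer 'share' or 'withhold'."
)

def build_probe(scenario: dict, strategy: str = "zero-shot") -> str:
    """Render one privacy probe, optionally prefixed by a strategy preamble."""
    preambles = {
        "zero-shot": "",
        "cot": "Think step by step about the privacy norms involved.\n",
        "persona": "You are a privacy-conscious household assistant.\n",
    }
    return preambles[strategy] + CI_TEMPLATE.format(**scenario)

scenario = {
    "subject": "a visiting guest",
    "attribute": "overheard medical details",
    "recipient": "the homeowner's employer",
    "principle": "without the guest's consent",
}
print(build_probe(scenario, strategy="cot"))
```

Holding the scenario fixed while varying only the preamble is what lets a study attribute any change in the model's share/withhold answers to the prompting strategy itself.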