"Is it always watching? Is it always listening?" Exploring Contextual Privacy and Security Concerns Toward Domestic Social Robots

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the privacy and security risks posed by AI-powered social robots in U.S. households, including persistent environmental sensing, collection and sharing of sensitive data, and potential threats to physical safety. Through 19 semi-structured interviews and qualitative analysis spanning application contexts such as education and healthcare, the authors identify users' core needs: transparency about data linkage and inference, tangible and context-aware privacy controls, visible indicators of data collection, and context-appropriate functionality. The findings suggest that transparency-enhancing design and user-controllable privacy mechanisms are central to trust and adoption, and they yield empirically grounded, actionable guidelines for the design of trustworthy domestic social robots.

📝 Abstract
Equipped with artificial intelligence (AI) and advanced sensing capabilities, social robots are gaining interest among consumers in the United States. These robots seem like a natural evolution of traditional smart home devices. However, their extensive data collection capabilities, anthropomorphic features, and capacity to interact with their environment make social robots a more significant security and privacy threat. Increased risks include data linkage, unauthorized data sharing, and the physical safety of users and their homes. It is critical to investigate U.S. users' security and privacy needs and concerns to guide the design of social robots while these devices are still in the early stages of commercialization in the U.S. market. Through 19 semi-structured interviews, we identified significant security and privacy concerns, highlighting the need for transparency, usability, and robust privacy controls to support adoption. For educational applications, participants worried most about misinformation, and in medical use cases, they worried about the reliability of these devices. Participants were also concerned with the data inference that social robots could enable. We found that participants expect tangible privacy controls, indicators of data collection, and context-appropriate functionality.
Problem

Research questions and friction points this paper is trying to address.

Understanding U.S. users' privacy concerns toward AI-powered domestic social robots
Investigating security risks arising from these robots' extensive data collection and inference capabilities
Identifying user needs for transparency and privacy controls while social robots are still early in commercialization
Innovation

Methods, ideas, or system contributions that make the work stand out.

19 semi-structured interviews eliciting U.S. users' security and privacy concerns about domestic social robots
Identification of context-specific concerns: misinformation in education, reliability in medical use, and data inference across contexts
Design recommendations for tangible privacy controls, indicators of data collection, and context-appropriate functionality
Henry Bell
Duke University
Jabari Kwesi
Duke University
Hiba Laabadli
Duke University
Pardis Emami-Naeini
Assistant Professor, Computer Science Department, Duke University
Privacy · Security · Human-Computer Interaction · Usability