🤖 AI Summary
This study addresses the lack of transparency and regulatory oversight in how contemporary romantic AI platforms govern users’ intimate affective data, a gap that heightens risks to user rights. Through a qualitative comparative analysis of the privacy policies and terms of service of six Western and Chinese platforms, the study identifies three emergent governance mechanisms: “default training appropriation,” “ownership reconstruction,” and “intimate history assetization.” Together, these mechanisms show how platforms convert intimate disclosures into reusable data assets while externalizing the associated risks onto users. The findings reveal that platforms routinely claim broad permissions to store, analyse, and train on intimate data, exposing significant regulatory gaps in current governance frameworks. This work lays groundwork for future policy development, empirical inquiry, and design interventions concerning human–AI intimate relationships.
📝 Abstract
Romantic AI platforms invite intimate emotional disclosure, yet their data governance practices remain underexamined. This preliminary study analyses the privacy policies and terms of service of six Western and Chinese romantic AI platforms. We find that intimate disclosures are often positioned as reusable data assets, with broad permissions for storage, analysis, and model training. We identify default training appropriation, ownership reconstruction, and intimate history assetization as the key mechanisms that structure these practices, expanding platforms' rights while shifting risk onto users. Our findings surface key governance challenges in romantic AI and are intended to provoke discussion and inform future empirical and design research on human–AI intimacy and its governance.