I, Robot? Socio-Technical Implications of Ultra-Personalized AI-Powered AAC; an Autoethnographic Account

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the problem that generic AI-based message prediction in augmentative and alternative communication (AAC) devices fails to reflect a user's linguistic identity, forcing frequent manual editing. To investigate the technical feasibility and socio-technical implications of personalized language modeling, we conducted the first autoethnographic study embedding hyper-personalized AI into an AAC system. Over seven months, we collected the lead author's personal communication data and fine-tuned a language model on it; we then deployed the model for three months of real-world use, supported by interaction logging and reflective journaling. Results show that the personalized model markedly reduces editing effort and improves communicative efficiency. At the same time, it surfaces critical tensions, including blurred identity boundaries, privacy trade-offs from relinquishing conversational data, contested authorship, and threats to expressive autonomy. This work provides the first systematic empirical account of the dual nature of AI personalization in AAC contexts, offering foundational evidence and theoretical insights for human-centered design and ethical governance of assistive AI.

📝 Abstract
Generic AI auto-complete for message composition often fails to capture the nuance of personal identity, requiring significant editing. While harmless in low-stakes settings, for users of Augmentative and Alternative Communication (AAC) devices, who rely on such systems for everyday communication, this editing burden is particularly acute. Intuitively, the need for edits would be lower if language models were personalized to the communication of the specific user. While technically feasible, such personalization raises socio-technical questions: what are the implications of logging one's own conversations, and how does personalization affect privacy, authorship, and control? We explore these questions through an autoethnographic study in three phases: (1) seven months of collecting all the lead author's AAC communication data, (2) fine-tuning a model on this dataset, and (3) three months of daily use of personalized AI suggestions. We reflect on these phases through continuous diary entries and interaction logs. Our findings highlight the value of personalization as well as its implications for privacy, authorship, and the blurring boundaries of self-expression.
Problem

Research questions and friction points this paper is trying to address.

Personalizing AI for AAC users' communication needs
Exploring privacy and authorship in personalized AI systems
Assessing socio-technical impacts of ultra-personalized AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized AI fine-tuning for AAC users
Autoethnographic data collection and analysis
Privacy-aware conversational AI personalization
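The paper fine-tunes a language model on months of logged personal communication; it does not specify its modeling stack. As a toy analogue of the core idea, that suggestions personalized from one's own message logs can track individual phrasing, here is a minimal stdlib sketch using a word-bigram model (a stand-in, not the paper's fine-tuned LM; the example messages are hypothetical):

```python
from collections import Counter, defaultdict

def build_bigram_model(messages):
    """Count word bigrams across a user's logged messages."""
    model = defaultdict(Counter)
    for msg in messages:
        words = msg.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def suggest(model, prev_word, k=3):
    """Return up to k most frequent continuations of prev_word
    in this user's own communication history."""
    return [w for w, _ in model[prev_word.lower()].most_common(k)]

# Hypothetical logged AAC messages for one user.
logs = [
    "I want coffee please",
    "I want to rest now",
    "I want coffee now",
]
model = build_bigram_model(logs)
print(suggest(model, "want"))  # → ['coffee', 'to']
```

Even this trivial model illustrates the socio-technical trade-off the paper examines: better suggestions require retaining a log of everything the user has said.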