🤖 AI Summary
This paper interrogates the normative justification—not merely technical feasibility—of aligning large language models (LLMs) with linguistically diverse users, including those varying by age, gender, and multilingual background. Method: Drawing on sociolinguistic theory and integrating human-computer interaction and AI ethics perspectives, it conducts a critical conceptual analysis without empirical modeling. Contribution/Results: The study offers the first systematic critique of the value assumptions and latent risks underlying linguistic alignment, warning that uncritical adaptation may reinforce biases and undermine model generalizability and fairness. It proposes three principled guidelines for inclusive LLM design—decentered linguistic standards, a dynamic view of linguistic competence, and sensitivity to structural inequities—alongside actionable evaluation dimensions. These contributions establish a theoretical foundation for ethically bounded linguistic alignment in LLM development.
📝 Abstract
We discuss how desirable it is that Large Language Models (LLMs) be able to adapt, or align, their language behavior with users who may be diverse in their language use. User diversity may arise due to, among other factors, i) age differences; ii) gender characteristics; and/or iii) multilingual experience, and the associated differences in language processing and use. We consider potential consequences for usability, communication, and LLM development.