🤖 AI Summary
This study examines whether large language model (LLM)-driven dialogue systems have genuinely advanced our understanding of human linguistic competence. It systematically traces the evolution of natural language processing, from early rule-based systems to contemporary LLMs, and juxtaposes this trajectory with theoretical frameworks from linguistics and cognitive science concerning the mental mechanisms underlying human language. The paper concludes that, despite remarkable progress in generative capabilities, current technologies have not substantively deepened our grasp of the nature of human language. Its key contribution is an integrative, interdisciplinary analytical framework that explicitly identifies a profound disconnect between artificial language modeling and human language comprehension, thereby charting a path for future research that balances technical performance with scientific explanatory power.
📝 Abstract
In this paper, we discuss the relationship between natural language processing (NLP) by computers and the understanding of the human language capacity, as studied in linguistics and cognitive science. We outline the evolution of NLP from its beginnings to the age of large language models, and for each of its main paradigms we highlight similarities to and differences from theories of the human language capacity. We conclude that the evolution of language technology has not substantially deepened our understanding of how human minds process natural language, despite the impressive language abilities attained by current chatbots built on artificial neural networks.