Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis of L1-Dependent Biases

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) can simulate second-language (L2) English conversational features shaped by first-language (L1) interference. Using multilingual prompt engineering, we elicit zero-shot L2 learner personas—representing seven L1 backgrounds (e.g., Japanese, Thai, Urdu)—from Qwen2.5, LLaMA3.3, Deepseek-V3, and GPT-4o, and compare their outputs against authentic learner corpora. We introduce novel quantitative measures—information entropy and distributional density modeling—to characterize cross-linguistic transfer phenomena, including L1-driven lexical avoidance, tense misuse, and noun-verb collocational deviations. Results demonstrate that state-of-the-art LLMs robustly replicate empirically attested human L2 error patterns—for instance, tense-aspect inconsistency among Japanese, Korean, and Chinese learners, and argument structure anomalies among Urdu learners—providing the first evidence that LLMs implicitly acquire L1–L2 transfer regularities. This work establishes a new methodological paradigm for computational second-language acquisition modeling, AI-augmented language assessment, and adaptive language instruction.
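The entropy-based comparison described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the sample sentences, tokenization, and the choice of word-level distributions are all hypothetical stand-ins for the study's linguistic features.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of a token frequency distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical samples: an LLM-simulated learner turn vs. a human learner turn.
llm_sample = "i go to school yesterday and i meet my friend".split()
human_sample = "yesterday i go school and i meet friend".split()

# A small entropy gap suggests similar lexical diversity between the simulated
# and authentic samples; the paper additionally models distributional density
# over specific transfer phenomena (tense, collocations, avoidance).
gap = abs(shannon_entropy(llm_sample) - shannon_entropy(human_sample))
print(gap)
```

In practice such measures would be computed over full learner corpora per L1 group rather than single utterances, so that low-frequency avoidance patterns register in the distributions.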

📝 Abstract
This study evaluates Large Language Models' (LLMs) ability to simulate the non-native-like English use observed in human second language (L2) learners whose production is influenced by their native first language (L1). In dialogue-based interviews, we prompt LLMs to mimic L2 English learners with specific L1s drawn from seven languages (e.g., Japanese, Thai, Urdu), comparing their outputs to real L2 learner data. Our analysis examines L1-driven linguistic biases, such as reference word usage and avoidance behaviors, using information-theoretic and distributional density measures. Results show that modern LLMs (e.g., Qwen2.5, LLaMA3.3, Deepseek-V3, GPT-4o) replicate L1-dependent patterns observed in human L2 data, with distinct influences from different languages (e.g., Japanese, Korean, and Mandarin significantly affect tense agreement, while Urdu influences noun-verb collocations). Our results reveal the potential of LLMs for L2 dialogue generation and evaluation in future educational applications.
Problem

Research questions and friction points this paper is trying to address.

Assess LLMs' ability to simulate L2 English
Analyze L1-dependent linguistic biases in LLMs
Compare LLM outputs with real L2 learner data
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs simulate L2-English dialogues
Analyze L1-dependent linguistic biases
Information-theoretic measures to evaluate simulation accuracy