LLMs Know More Than Words: A Genre Study with Syntax, Metaphor & Phonetics

📅 2025-12-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether large language models (LLMs) implicitly capture deep linguistic features—such as syntactic structure, metaphor density, and prosodic rhythm—and evaluates their utility in multilingual literary genre classification. Method: We propose the first multilingual literary analysis framework integrating dependency parsing, metaphor identification, and metrical scansion, validated on poetry, drama, and prose from Project Gutenberg across six languages. Classification is performed by jointly encoding explicit linguistic features and raw text as input to LLMs. Contribution/Results: Results demonstrate that LLMs do implicitly model complex linguistic structures; incorporating structured linguistic features significantly improves classification accuracy—especially for fine-grained distinctions between poetry and drama. The framework advances understanding of LLMs’ linguistic representational capacity and offers a novel pathway toward enhancing their interpretability in literary computing.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) demonstrate remarkable potential across diverse language related tasks, yet whether they capture deeper linguistic properties, such as syntactic structure, phonetic cues, and metrical patterns from raw text remains unclear. To analysis whether LLMs can learn these features effectively and apply them to important nature language related tasks, we introduce a novel multilingual genre classification dataset derived from Project Gutenberg, a large-scale digital library offering free access to thousands of public domain literary works, comprising thousands of sentences per binary task (poetry vs. novel;drama vs. poetry;drama vs. novel) in six languages (English, French, German, Italian, Spanish, and Portuguese). We augment each with three explicit linguistic feature sets (syntactic tree structures, metaphor counts, and phonetic metrics) to evaluate their impact on classification performance. Experiments demonstrate that although LLM classifiers can learn latent linguistic structures either from raw text or from explicitly provided features, different features contribute unevenly across tasks, which underscores the importance of incorporating more complex linguistic signals during model training.
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' ability to learn syntactic, phonetic, and metaphorical features from text.
Assesses if these linguistic features improve multilingual genre classification performance.
Investigates uneven contribution of different linguistic features across classification tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual genre dataset with explicit linguistic features
Evaluating LLMs on syntactic, metaphoric, and phonetic properties
Augmenting classification with tree structures, metaphor counts, phonetics
🔎 Similar Papers
No similar papers found.
W
Weiye Shi
Institute for Artificial Intelligence, Peking University
Zhaowei Zhang
Zhaowei Zhang
Peking University
AI GovernanceAI AlignmentGame TheoryHuman-AI Collaboration
S
Shaoheng Yan
Institute for Artificial Intelligence, Peking University
Y
Yaodong Yang
Institute for Artificial Intelligence, Peking University