Unsupervised Morphological Tree Tokenizer

📅 2024-06-21
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Conventional statistical subword tokenization methods (e.g., BPE, WordPiece) often split words inside morphemes, undermining morphological and semantic integrity. Method: We propose an unsupervised character-level, tree-based tokenization framework guided by linguistic morphological structure. First, we introduce MorphOverriding, a mechanism enforcing morpheme atomicity and indivisibility. Second, we design a deep encoder that jointly models intra-word hierarchical structure and contextual representations. Third, we employ self-supervised pretraining coupled with top-down vocabulary matching for end-to-end tokenization. Contribution/Results: To our knowledge, this is the first fully unsupervised method that automatically induces linguistically plausible character-level trees adhering to morphological principles. It significantly outperforms BPE and WordPiece on both morphological segmentation and downstream language modeling tasks, while preserving morpheme boundaries and derivational/inflectional hierarchies.

๐Ÿ“ Abstract
As a cornerstone in language modeling, tokenization involves segmenting text inputs into pre-defined atomic units. Conventional statistical tokenizers often disrupt constituent boundaries within words, thereby corrupting semantic information. To address this drawback, we introduce morphological structure guidance to tokenization and propose a deep model to induce character-level structures of words. Specifically, the deep model jointly encodes internal structures and representations of words with a mechanism named $\textit{MorphOverriding}$ to ensure the indecomposability of morphemes. By training the model with self-supervised objectives, our method is capable of inducing character-level structures that align with morphological rules without annotated training data. Based on the induced structures, our algorithm tokenizes words through vocabulary matching in a top-down manner. Empirical results indicate that the proposed method effectively retains complete morphemes and outperforms widely adopted methods such as BPE and WordPiece on both morphological segmentation tasks and language modeling tasks. The code will be released later.
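The top-down vocabulary matching described in the abstract can be illustrated with a minimal sketch. The tree, the vocabulary, and the `tokenize` function below are hypothetical illustrations of the general idea (emit a span as one token if it is in the vocabulary, otherwise recurse into the induced children), not the authors' released implementation:

```python
# Hypothetical sketch of top-down vocabulary matching over an induced
# character-level binary tree (illustrative only, not the paper's code).

def tokenize(node, vocab):
    """Emit the largest vocabulary-covered spans, splitting top-down."""
    span = node["span"]
    if span in vocab or "children" not in node:
        # The whole span is a known token (or an unsplittable leaf).
        return [span]
    # Otherwise, descend into the induced sub-structure.
    tokens = []
    for child in node["children"]:
        tokens.extend(tokenize(child, vocab))
    return tokens

# Toy tree for "unlockable" whose splits happen to align with morphemes.
tree = {
    "span": "unlockable",
    "children": [
        {"span": "un"},
        {"span": "lockable",
         "children": [{"span": "lock"}, {"span": "able"}]},
    ],
}
vocab = {"un", "lock", "able"}
print(tokenize(tree, vocab))  # ['un', 'lock', 'able']
```

Because matching follows the induced structure rather than greedy merge statistics, a well-formed tree keeps morphemes like "un" and "able" intact instead of splitting them mid-character as a frequency-based merge might.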
Problem

Research questions and friction points this paper is trying to address.

Improves tokenization by preserving word morphological structure
Addresses semantic corruption in conventional statistical tokenizers
Induces character-level structures without annotated training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses morphological structure guidance for tokenization
Employs MorphOverriding to preserve morpheme indecomposability
Self-supervised training induces character-level morphological structures
Qingyang Zhu (ShanghaiTech University)
Xiang Hu (Ant Group)
Pengyu Ji (ShanghaiTech University)
Wei Wu (Ant Group)
Kewei Tu (School of Information Science and Technology, ShanghaiTech University, China)
Natural Language Processing · Machine Learning